
Book, English, format (W × H): 191 mm × 234 mm, weight: 450 g

Nguyen / Hoang

Federated Learning

Theory and Practice
Year of publication: 2024
ISBN: 978-0-443-19037-7
Publisher: Elsevier Science & Technology



Federated Learning: Theory and Practice provides a holistic treatment of federated learning as a distributed learning system with various forms of decentralized data and features. Part I of the book begins with a broad overview of optimization fundamentals and modeling challenges, covering various aspects of communication efficiency, theoretical convergence, and security. Part II features emerging challenges stemming from many socially driven concerns of federated learning as a future public machine learning service. Part III concludes the book with a wide array of industrial applications of federated learning, as well as ethical considerations, showcasing its immense potential for driving innovation while safeguarding sensitive data.

Federated Learning: Theory and Practice provides a comprehensive and accessible introduction to federated learning that is suitable for researchers and students in academia, as well as industrial practitioners who seek to leverage the latest advances in machine learning for their entrepreneurial endeavors.

Further information & material


PART I: Optimization Fundamentals for Secure Federated Learning
1. Gradient Descent-Type Methods
2. Considerations on the Theory of Training Models with Differential Privacy
3. Privacy Preserving Federated Learning: Algorithms and Guarantees
4. Assessing Vulnerabilities and Securing Federated Learning
5. Adversarial Robustness in Federated Learning
6. Evaluating Gradient Inversion Attacks and Defenses

PART II: Emerging Topics
7. Personalized Federated Learning: Theory and Open Problems
8. Fairness in Federated Learning
9. Meta Federated Learning
10. Graph-Aware Federated Learning
11. Vertical Asynchronous Federated Learning: Algorithms and Theoretical Guarantees
12. Hyperparameter Tuning for Federated Learning - Systems and Practices
13. Hyper-parameter Optimization for Federated Learning
14. Federated Sequential Decision-Making: Bayesian Optimization, Reinforcement Learning and Beyond
15. Data Valuation in Federated Learning

PART III: Applications and Ethical Considerations
16. Incentives in Federated Learning
17. Introduction to Federated Quantum Machine Learning
18. Federated Quantum Natural Gradient Descent for Quantum Federated Learning
19. Mobile Computing Framework for Federated Learning
20. Federated Learning for Privacy-preserving Speech Recognition
21. Ethical Considerations and Legal Issues Relating to Federated Learning


Hoang, Trong Nghia
Dr. Hoang received his Ph.D. in Computer Science from the National University of Singapore (NUS) in 2015. From 2015 to 2017, he was a Research Fellow at NUS, followed by a postdoc at MIT (2017-2018). From 2018 to 2020, he was a Research Staff Member and Principal Investigator at the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. In November 2020, Dr. Hoang joined the AWS AI Labs of Amazon in Santa Clara, California as a senior research scientist. His research interests span the broad areas of deep generative modeling with applications to (personalized) federated learning, meta learning, and black-box model fusion and reconfiguration. He has published actively in key machine learning and AI venues such as ICML, NeurIPS, and AAAI, among others. He has served as a senior program committee member for AAAI and IJCAI and as a program committee member for ICML, NeurIPS, ICLR, and AISTATS. He also organized a recent NeurIPS 2021 workshop on federated learning.

Nguyen, Lam M.
Lam M. Nguyen is a Staff Research Scientist at IBM Research, Thomas J. Watson Research Center, working at the intersection of optimization and machine learning/deep learning. He is also the PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Nguyen received his B.S. degree in Applied Mathematics and Computer Science from Lomonosov Moscow State University in 2008, his M.B.A. degree from McNeese State University in 2013, and his Ph.D. degree in Industrial and Systems Engineering from Lehigh University in 2018. He has extensive research experience in optimization for machine learning problems and has published mainly in top AI/ML and optimization venues, including ICML, NeurIPS, ICLR, AAAI, AISTATS, the Journal of Machine Learning Research, and Mathematical Programming. He has served as an Action/Associate Editor for the Journal of Machine Learning Research, Machine Learning, Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, and the Journal of Optimization Theory and Applications, and as an Area Chair for the ICML, NeurIPS, ICLR, AAAI, CVPR, UAI, and AISTATS conferences. His current research interests include the design and analysis of learning algorithms, optimization for representation learning, dynamical systems for machine learning, federated learning, reinforcement learning, time series, and trustworthy/explainable AI.

