Abstract
This chapter examines real-world cases of data poisoning attacks in federated learning (FL) systems, in which malicious participants manipulate their local training data to degrade the accuracy or integrity of the global model. Case studies from healthcare, finance, the Internet of Things (IoT), and autonomous vehicles illustrate how these attacks unfold, their impact on system performance, and the lessons learned. Each case draws on documented incidents, academic research, or industry reports, combining practical and theoretical perspectives. The chapter also identifies patterns common across industries, describes the vulnerabilities most frequently targeted, and presents mitigation strategies observed in real deployments. Examining these scenarios offers actionable insight for researchers, developers, and security practitioners seeking to harden FL systems against such threats while preserving privacy and efficiency.
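As context for the attack model the abstract describes, the following is a minimal, hypothetical sketch of a label-flipping data poisoning attack against one federated averaging round on toy data; the helper names (`local_update`, `fed_avg`) and the logistic-regression setup are illustrative assumptions, not code from the chapter.

```python
# Minimal sketch (assumption): label-flipping data poisoning in one FedAvg round
# on a toy logistic-regression task; not taken from the chapter.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=200):
    """Synthetic binary-classification data for one client."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=20):
    """Local logistic-regression training starting from the global weights w."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (p - y) / len(y)         # gradient of the log loss
        w -= lr * grad
    return w

def fed_avg(updates):
    """Unweighted federated averaging of client weight vectors."""
    return np.mean(updates, axis=0)

clients = [make_client_data() for _ in range(5)]

# Poisoning step: one malicious client flips its local labels before training.
poisoned_X, poisoned_y = clients[0]
clients[0] = (poisoned_X, 1.0 - poisoned_y)

w_global = np.zeros(2)
w_global = fed_avg([local_update(w_global, X, y) for X, y in clients])

# Evaluate the poisoned global model on clean held-out data.
X_test, y_test = make_client_data(1000)
acc = np.mean(((X_test @ w_global) > 0) == y_test)
print(f"Global model accuracy after one poisoned round: {acc:.2f}")
```

Even this single flipped client pulls the averaged model away from the clean decision boundary, which is the basic degradation mechanism the case studies in the chapter explore at scale.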
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Adversarial AI and Data Poisoning in Federated Learning |
| Editors | Vipul Jain, Shikha Khullar, Manju Lata Joshi, Deepak Kumar Jain |
| Publisher | IGI Global |
| Pages | 505-536 |
| Number of pages | 32 |
| ISBN (Electronic) | 9798337362267 |
| ISBN (Print) | 9798337362243, 9798337362250 |
| DOIs | |
| Publication status | Published - 14 Nov 2025 |