
Real-World Case Studies of Data Poisoning Attacks in Federated Learning Applications

Shradha Sonawane, Gitanjali Shinde, Grishma Bobhate, Sonal Fatangare, Sharnil Pandya

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

This chapter examines real-world data poisoning attacks on federated learning (FL) systems, in which malicious participants alter their local training data to degrade the accuracy or integrity of the global model. Case studies from healthcare, finance, the Internet of Things (IoT), and autonomous vehicles illustrate how these attacks unfold, their impact on system performance, and the lessons learned. Each case draws on documented incidents, academic research, or industry reports, offering both practical and theoretical perspectives. The chapter also identifies common attack patterns across industries, describes frequently targeted vulnerabilities, and presents practical mitigation strategies observed in real deployments. These scenarios provide valuable insights for researchers, developers, and security practitioners seeking to harden FL systems against such threats while preserving privacy and efficiency.
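The attack pattern the abstract describes — a malicious participant altering its local labels so that the aggregated global model degrades — can be sketched with a toy federated-averaging (FedAvg) simulation. All names, dimensions, and hyperparameters below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Hypothetical binary task: label is the sign of the first feature.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.5, steps=20):
    # Plain logistic-regression gradient descent on one client's local data.
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(client_data, rounds=10):
    # Server averages the locally updated models each round (FedAvg).
    w = np.zeros(2)
    for _ in range(rounds):
        w = np.mean([local_update(w.copy(), X, y) for X, y in client_data], axis=0)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))

clients = [make_data(200) for _ in range(5)]
X_test, y_test = make_data(1000)

clean_w = fed_avg(clients)

# Label-flipping poison: one participant inverts all of its local labels
# before training, dragging the averaged model away from the true boundary.
poisoned = [(X, 1 - y) if i == 0 else (X, y)
            for i, (X, y) in enumerate(clients)]
poisoned_w = fed_avg(poisoned)

print("clean accuracy:   ", accuracy(clean_w, X_test, y_test))
print("poisoned accuracy:", accuracy(poisoned_w, X_test, y_test))
print("weight-norm shrinkage:",
      np.linalg.norm(poisoned_w) / np.linalg.norm(clean_w))
```

With one of five clients poisoned, the averaged model typically keeps the correct decision direction but loses margin (a smaller weight norm), illustrating how even a minority attacker erodes model quality without necessarily flipping predictions outright.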

Original language: English
Title of host publication: Adversarial AI and Data Poisoning in Federated Learning
Editors: Vipul Jain, Shikha Khullar, Manju Lata Joshi, Deepak Kumar Jain
Publisher: IGI Global
Pages: 505-536
Number of pages: 32
ISBN (Electronic): 9798337362267
ISBN (Print): 9798337362243, 9798337362250
DOIs
Publication status: Published - 14 Nov 2025

