You can literally break a physical machine just by feeding its AI 'brain' a few pieces of bad data.
This research demonstrates that attackers don't need to understand a robot's mechanical design or its control algorithms to destabilize it. By subtly corrupting the data used to synthesize its controller, they can systematically trigger catastrophic failures in physical systems, exposing a critical security flaw in data-driven control.
Data Poisoning Attacks Can Systematically Destabilize Data-Driven Control Synthesis
arXiv · 2604.08392
Data-driven control has emerged as a powerful paradigm for synthesizing controllers directly from data, bypassing explicit model identification. However, this reliance on data introduces new and largely unexplored vulnerabilities. In this paper, we show that an attacker can systematically poison the data used for control synthesis, causing any linear state-feedback controller synthesized by the planner to destabilize the physical system. Concerningly, we show that the attacker can achieve this objective without knowledge of the system model or of the specific synthesis algorithm the planner uses.
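The paper's certified attack construction isn't in this excerpt, but the threat model is easy to get a feel for. Below is a minimal toy sketch in Python; everything in it (the two-state plant, the poisoned sample indices, and an identify-then-LQR pipeline standing in for direct data-driven synthesis) is my own illustrative assumption, not the paper's algorithm. The idea it demonstrates: a least-squares fit is linear in the recorded successor states, so corrupting just a few of them can steer the fitted model to an adversarial target, here one with the input direction sign-flipped, and any gain that stabilizes that fake model then destabilizes the true plant.

```python
# Toy sketch of the poisoning threat model (illustrative only, not the
# paper's certified attack).  A least-squares fit Theta = X1 @ pinv([X0; U])
# is linear in the successor states X1, so editing a few columns of X1
# can shift the fitted (A, B) to an adversarial target.
import numpy as np

rng = np.random.default_rng(0)

# True plant (unknown to the controller designer): x+ = A x + B u, unstable.
A = np.array([[1.1, 0.2],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])

# Open-loop experiment: random inputs, recorded state transitions.
T = 20
U = rng.normal(size=(1, T))
X = np.zeros((2, T + 1))
X[:, 0] = rng.normal(size=2)
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U[:, k]
X0, X1 = X[:, :-1], X[:, 1:]

def synthesize_gain(X0, U, X1, iters=500):
    """Fit (A, B) by least squares, then compute a discrete LQR gain via
    Riccati value iteration.  A stand-in for the synthesis step."""
    Theta = X1 @ np.linalg.pinv(np.vstack([X0, U]))   # [A_hat | B_hat]
    Ah, Bh = Theta[:, :2], Theta[:, 2:]
    Q, R, P = np.eye(2), np.eye(1), np.eye(2)
    for _ in range(iters):
        K = np.linalg.solve(R + Bh.T @ P @ Bh, Bh.T @ P @ Ah)
        P = Q + Ah.T @ P @ (Ah - Bh @ K)
    return K

def rho(M):
    """Spectral radius: the closed loop is stable iff rho < 1."""
    return max(abs(np.linalg.eigvals(M)))

K_clean = synthesize_gain(X0, U, X1)
print("clean:    rho(A - B K) =", rho(A - B @ K_clean))   # < 1, stabilized

# Attack: steer the fit to the fake model (A, -B).  Because the fit is
# linear in X1, a correction spread over four samples shifts it exactly.
Zp = np.linalg.pinv(np.vstack([X0, U]))          # T x 3
S = [3, 8, 13, 18]                               # the poisoned samples
P_S = Zp[S, :]                                   # 4 x 3, full column rank
Delta = np.hstack([A, -B]) - X1 @ Zp             # desired shift of the fit
X1_poisoned = X1.copy()
X1_poisoned[:, S] += Delta @ np.linalg.pinv(P_S) # exact steering of the fit

K_bad = synthesize_gain(X0, U, X1_poisoned)
print("poisoned: rho(A - B K) =", rho(A - B @ K_bad))     # > 1, unstable
```

For this particular toy plant the failure is provable, not just empirical: any gain K that stabilizes the fake model (A, -B) makes trace(A + BK) less than 2, which forces the second feedback entry negative, so the true closed loop A - BK has trace above 2 and hence spectral radius above 1. The paper's result is far stronger, covering any linear state-feedback controller the planner synthesizes, but the sketch shows why a handful of corrupted samples is enough leverage.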