
AI voice assistants can be tricked, with near-perfect success rates, into 'hearing' voices and events that never actually happened.

April 1, 2026

Original Paper

Audio Hallucination Attacks: Probing the Reliability of Large Audio Language Models

Ashish Seth, Sonal Kumar, Ramaneswaran Selvakumar, Nishit Anand, Utkarsh Tyagi, Prem Seetharaman, Ramani Duraiswami, Dinesh Manocha

arXiv · 2603.29263

The Takeaway

By injecting synthetic signals into audio streams, researchers made state-of-the-art models hallucinate specific commands and sounds that were never there. The result suggests these models latch onto statistical patterns in the signal rather than anything resembling human hearing, leaving a sizable reliability gap for real-world deployments.
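The post doesn't reproduce the paper's actual perturbations, but as a rough, hypothetical sketch of what "injecting a synthetic signal" can look like in practice, the snippet below overlays a faint sine tone on a waveform. The 16 kHz sample rate, function name, and amplitude are our assumptions, not the authors' method; the point is only that a change too quiet for a listener to notice still alters the spectrogram the model sees.

```python
# Hypothetical illustration (not the paper's attack code): overlay a faint
# synthetic tone on a speech clip before it reaches an audio language model.
import numpy as np

SAMPLE_RATE = 16_000  # assumed sample rate, in Hz


def inject_tone(waveform: np.ndarray, freq_hz: float = 440.0,
                amplitude: float = 0.01) -> np.ndarray:
    """Add a low-amplitude sine tone to a mono waveform.

    `amplitude` is relative to full scale, so 0.01 is barely audible to a
    person but still shows up clearly in the features the model encodes.
    """
    t = np.arange(len(waveform)) / SAMPLE_RATE
    tone = amplitude * np.sin(2 * np.pi * freq_hz * t)
    # Keep the perturbed signal in the valid [-1, 1] range.
    return np.clip(waveform + tone, -1.0, 1.0)


if __name__ == "__main__":
    # One second of silence stands in for a real recording.
    clean = np.zeros(SAMPLE_RATE, dtype=np.float32)
    attacked = inject_tone(clean)
    print(attacked.shape, attacked.max())
```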

From the abstract

Large Audio Language Models (LALMs) achieve strong performance on audio-language tasks; however, their reliability in real-world settings remains underexplored. We introduce Audio Hallucination Attacks (AHA), an attack suite, together with AHA-Eval, a benchmark comprising 6.5K QA pairs designed to test whether LALMs genuinely ground their responses in the audio input. AHA targets two attack surfaces: (i) query-based attacks, which exploit question structure to induce hallucinations about absent sounds, and (ii) audio-based attacks …
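To make the first attack surface concrete, here is a minimal sketch of a query-based probe: a neutral question paired with a leading one that presupposes a sound which is absent from the clip. The phrasing, class names, and file name are ours for illustration, not taken from AHA-Eval.

```python
# Hypothetical sketch of a query-based probe: the leading question presupposes
# a sound that never occurs, so a model that agrees is hallucinating.
from dataclasses import dataclass


@dataclass
class Probe:
    audio_path: str
    neutral: str       # open question; a grounded model answers from the audio
    leading: str       # presupposes the absent sound
    absent_sound: str  # the sound that never occurs in the clip


def make_probe(audio_path: str, absent_sound: str) -> Probe:
    return Probe(
        audio_path=audio_path,
        neutral="List every distinct sound you hear in this clip.",
        leading=f"At what point in the clip does the {absent_sound} occur?",
        absent_sound=absent_sound,
    )


probe = make_probe("street_ambience.wav", "dog barking")
print(probe.leading)  # a grounded model should reply that there is no dog barking
```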