AI & ML · Breaks Assumption

Demonstrates that frontier LLMs fail at diagnostic reasoning in safety-critical robotics even when provided with perfect procedural knowledge.

March 31, 2026

Original Paper

Why Cognitive Robotics Matters: Lessons from OntoAgent and LLM Deployment in HARMONIC for Safety-Critical Robot Teaming

Sanjay Oruganti, Sergei Nirenburg, Marjorie McShane, Jesse English, Michael Roberts, Christian Arndt, Ramviyas Parasuraman, Luis Sentis

arXiv · 2603.26730

The Takeaway

The paper provides empirical evidence that LLM reasoning failures in robotics are architectural rather than knowledge-based: even with perfect procedural knowledge, models struggle with metacognitive self-monitoring and consequence-based action selection. It challenges the trend of using LLMs as the primary "brains" of safety-critical embodied agents.

From the abstract

Deploying embodied AI agents in the physical world demands cognitive capabilities for long-horizon planning that execute reliably, deterministically, and transparently. We present HARMONIC, a cognitive-robotic architecture that pairs OntoAgent, a content-centric cognitive architecture providing metacognitive self-monitoring, domain-grounded diagnosis, and consequence-based action selection over ontologically structured knowledge, with a modular reactive tactical layer. HARMONIC's modular design …
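To make the two-layer idea concrete, here is a minimal, purely illustrative sketch of the kind of strategic/tactical split the abstract describes. All names (`WorldModel`, `StrategicLayer`, `TacticalLayer`) and the toy door domain are assumptions for illustration, not the paper's actual implementation: the strategic layer simulates each candidate action's consequences against structured knowledge and vetoes predicted-unsafe outcomes, while the tactical layer merely executes the chosen symbolic action.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a HARMONIC-style two-layer loop (names and the
# toy domain are illustrative, not from the paper): a cognitive strategic
# layer selects actions by predicting their consequences, while a
# reactive tactical layer executes low-level primitives.

@dataclass
class WorldModel:
    # Minimal "ontologically structured" state: facts the agent believes.
    facts: set = field(default_factory=lambda: {"door_closed"})

    def predict(self, action: str) -> set:
        """Consequence prediction: facts that would hold after `action`."""
        effects = {                      # action -> (added facts, removed facts)
            "open_door": ({"door_open"}, {"door_closed"}),
            "push_door": ({"door_damaged"}, set()),
        }
        add, remove = effects.get(action, (set(), set()))
        return (self.facts - remove) | add


class StrategicLayer:
    """Consequence-based action selection with a simple safety veto."""
    UNSAFE = {"door_damaged"}

    def select(self, model: WorldModel, candidates: list) -> str:
        for action in candidates:
            predicted = model.predict(action)
            if predicted & self.UNSAFE:
                continue                 # veto: predicted unsafe outcome
            if "door_open" in predicted:
                return action            # achieves the goal safely
        return "ask_human"               # diagnose-and-escalate fallback


class TacticalLayer:
    """Reactive layer: maps symbolic actions to primitive commands."""
    def execute(self, action: str) -> str:
        return f"exec:{action}"


model = WorldModel()
plan = StrategicLayer().select(model, ["push_door", "open_door"])
print(TacticalLayer().execute(plan))  # -> exec:open_door
```

The point of the sketch is the division of labor: action choice is made deterministically by explicit consequence prediction over inspectable state, not by a generative model, which is what makes the behavior auditable in a safety-critical setting.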