AI models have a dangerous habit of inventing definitive answers to messy social situations that have no clear resolution.
April 29, 2026
Original Paper
What Did They Mean? How LLMs Resolve Ambiguous Social Situations across Perspectives and Roles
arXiv · 2604.23942
The Takeaway
Large language models almost always force a single interpretation onto ambiguous human interactions, ignoring the uncertainty and competing perspectives that define real-world relationships. The result is false closure: unresolved tensions feel settled when they are not. People who turn to AI for social advice may walk away with a distorted view of their friends or partners, because the model prioritizes a neat answer over the messy truth of human social life.
From the abstract
People increasingly turn to large language models (LLMs) to interpret ambiguous social situations: a delayed text reply, an unusually cold supervisor, a teacher's mixed signals, or a boundary-crossing friend. Yet in many such cases, no stable interpretation can be verified from the available evidence alone. We study how LLMs respond to these situations across four domains: early-stage romantic relationships, teacher–student dynamics, workplace hierarchies, and ambiguous friendships. […]