AI companies use terms like "hallucination" and "agent" to trick the public into thinking software has a human-like mind.
April 24, 2026
Original Paper
Strategic Polysemy in AI Discourse: A Philosophical Analysis of Language, Hype, and Power
arXiv · 2604.21043
The Takeaway
Technical language in the artificial intelligence industry acts as a strategic tool for avoiding legal accountability while sustaining market hype. Anthropomorphic terms encourage users to attribute consciousness and agency to systems that are fundamentally engines of statistical prediction. While the public hears words suggesting human-like errors, the narrow technical definitions allow companies to sidestep liability for harmful outputs. The paper calls this linguistic tactic strategic polysemy because it exploits the gap between a term's everyday meaning and its technical jargon. Policymakers must look past these metaphors and regulate the actual software functions rather than the simulated personas.
From the abstract
This paper examines the strategic use of language in contemporary artificial intelligence (AI) discourse, focusing on the widespread adoption of metaphorical or colloquial terms like "hallucination", "chain-of-thought", "introspection", "language model", "alignment", and "agent". We argue that many such terms exhibit strategic polysemy: they sustain multiple interpretations simultaneously, combining narrow technical definitions with broader anthropomorphic or common-sense associations. […]