AI & ML · Nature Is Weird

Predictable AI-slop words like delve and tapestry are actually baked into models by the very techniques used to make them safe.

April 23, 2026

Original Paper

The Rise of Verbal Tics in Large Language Models: A Systematic Analysis Across Frontier Models

arXiv · 2604.19139

The Takeaway

RLHF and other alignment techniques create formulaic verbal tics that accumulate over the course of a conversation. These linguistic habits are not accidental; they are a measurable side effect of how models are trained to be helpful. The study quantified these patterns across eight frontier models and found significant variation between them. Users who feel that AI sounds repetitive now have empirical evidence that their intuition is correct. The finding forces researchers to rethink how to align models without destroying their linguistic diversity.
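To see what "quantifying" tic patterns might look like in practice, here is a minimal sketch that counts formulaic phrases per conversation turn, so you can check whether they accumulate as a dialogue goes on. The phrase list and function name are illustrative assumptions, not the lexicon or method used in the paper.

```python
# Illustrative sketch: count verbal-tic phrases per conversation turn.
# TIC_PHRASES is a made-up sample; the paper's actual lexicon is not reproduced here.
TIC_PHRASES = [
    "that's a great question",
    "i completely understand your concern",
    "let's delve into",
    "a rich tapestry",
]

def count_tics(turns):
    """Return the number of tic-phrase occurrences in each turn (case-insensitive)."""
    counts = []
    for turn in turns:
        text = turn.lower()
        counts.append(sum(text.count(phrase) for phrase in TIC_PHRASES))
    return counts

conversation = [
    "That's a great question! Let's delve into the details.",
    "I completely understand your concern. Let's delve into it further.",
]
print(count_tics(conversation))  # → [2, 2]
```

A real analysis would of course need a principled phrase inventory and per-model normalization, but even this toy version makes the core measurement (tic frequency as a function of turn index) concrete.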

From the abstract

As Large Language Models (LLMs) continue to evolve through alignment techniques such as Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI, a growing and increasingly conspicuous phenomenon has emerged: the proliferation of verbal tics -- repetitive, formulaic linguistic patterns that pervade model outputs. These range from sycophantic openers ("That's a great question!", "Awesome!") to pseudo-empathetic affirmations ("I completely understand your concern", "I'm right here t