AI is now so good at faking being human in psych tests that even the pros can't tell it apart from real people.
March 20, 2026
Original Paper
Online behavioral studies are vulnerable to agentic AI
PsyArXiv · p8w6y_v1
The Takeaway
Many online behavioral researchers rely on response-time patterns and 'bot checks' to verify that their data come from real humans. This study shows that a simple prompt lets a modern AI mimic human-like variability and trial-by-trial response patterns so convincingly that current detection methods are effectively useless.
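The detection markers at issue here include response-time distributions and autocorrelation across trials. As an illustrative sketch only (not the paper's actual analysis code), one might compute such markers like this; the function names and thresholds are hypothetical:

```python
import statistics

def lag1_autocorrelation(rts):
    """Lag-1 autocorrelation of a response-time series:
    do slow trials tend to follow slow trials?"""
    mean = statistics.fmean(rts)
    num = sum((a - mean) * (b - mean) for a, b in zip(rts, rts[1:]))
    den = sum((x - mean) ** 2 for x in rts)
    return num / den

def rt_markers(rts):
    """Summary markers of the kind used to screen for bots:
    distribution shape plus trial-to-trial dependence."""
    return {
        "mean_ms": statistics.fmean(rts),
        "sd_ms": statistics.stdev(rts),
        "lag1_autocorr": lag1_autocorrelation(rts),
    }

# Alternating fast/slow responses produce strong negative autocorrelation;
# a slowly drifting series produces positive autocorrelation.
print(lag1_autocorrelation([400, 600, 400, 600]))   # negative
print(lag1_autocorrelation([400, 450, 500, 550, 600]))  # positive
```

The study's point is that markers like these are easy to spoof: an AI told to "respond like a tired human" can generate series whose distribution and autocorrelation fall squarely in the human range.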
From the abstract
We recently warned against the potential danger of LLMs and agentic AI for online behavioral research (Van der Stigchel et al., 2026). Using response time distributions, normal quantiles, and autocorrelation across trials, we suggested that such bots may already have entered Prolific in one of our datasets. Chetverikov (2026) convincingly demonstrated that these markers are insufficient in establishing the presence of bots in our data. Unfortunately, this does not mean that online behavioral stu…