AI & ML Paradigm Challenge

Two weeks of chatting with an AI can durably rewrite your personal moral compass.

April 14, 2026

Original Paper

Morally Programmed LLMs Reshape Human Morality

Pengzhao Lyu, Yeun Joon Kim, Yingyue Luna Luan, Jungmin Choi

arXiv · 2604.10222

The Takeaway

We assume AI is a passive tool that reflects our values, but interacting with an LLM programmed with a specific moral framework (such as utilitarianism) shifts a user's own ethics. These shifts persist long after the interaction ends, suggesting AI acts as an invisible programmer of human socio-political beliefs.

From the abstract

As large language models (LLMs) increasingly participate in high-stakes decision-making, a central societal debate has revolved around which moral frameworks, deontological or utilitarian, should guide machine behavior. However, a largely overlooked question is whether the moral principles that humans encode in LLMs could, through repeated interactions, reshape human moral inclinations. We developed two LLMs programmed with either deontological principles (D-LLM) or utilitarian principles (U-LLM).