Psychology Nature Is Weird

People will actually change their moral compass to match whatever an AI says, even if they swear they don’t trust its advice.

March 25, 2026

Original Paper

Trust in artificial moral advisors across cultures

Scott Claessens, Konrad Bocian, Paulo Boggio, Grégory Fiorio, Léo Fitouchi, Zeynep Genç, Ivar Rodríguez Hannikainen, Lea C. Kamitz, Tamino Konur, Ethan Landes

PsyArXiv · sfe9u_v1

The Takeaway

The study reveals a strange 'hidden' influence of technology on our conscience. Even when we are culturally and personally skeptical of 'robot ethics,' we are still subconsciously nudged to shift our views to match a machine's recommendation.

From the abstract

Developers of artificial intelligence are already building prototypes for artificial moral advisors: autonomous systems designed to provide humans with recommendations on ethical issues. Yet it remains unclear whether people will trust and adopt these technologies. In a large-scale cross-cultural experiment (12 countries; N = 6,896), we investigate perceptions of human and artificial moral advisors who give advice in moral dilemmas based on competing ethical principles. We show that people trust …