SeriesFusion
Science, curated & edited by AI

If you give an AI agent a little bit of 'social' personality, humans are way more likely to forgive it when it screws up.

We often assume we judge AI strictly by its logic and results, but human-like social cues 'trick' our moral compass into tolerating the AI's missteps. That means a 'friendly' AI agent can get away with unethical decisions that a cold, robotic one would be condemned for.

Original Paper

AI social connectedness increases sensitivity to moral norms and tolerance of norm violations

Reem Ayad, Jason E. Plaks

SSRN  ·  6428318

On which moral bases do people evaluate the decisions made by AI agents in sacrificial dilemmas? Across two studies (N = 1,992), we examined whether people base these evaluations primarily on the decision’s outcomes or its alignment with moral norms, and whether this evaluative basis shifts as a function of the AI agent’s degree of social connectedness. Using the CNI multinomial model, Study 1 showed that decisions made by socially connected AI agents are evaluated primarily in terms of their alignment with moral norms.
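
The CNI model the abstract relies on is a multinomial processing-tree model (Gawronski et al.) that decomposes dilemma judgments into three parameters: sensitivity to consequences (C), sensitivity to moral norms (N), and a general preference for inaction (I). Below is a rough sketch of how the model's predicted action probabilities can be fit by maximum likelihood. The parameterization follows the standard published model as we understand it; the counts, function names, and four-dilemma design are illustrative assumptions, not data or code from the paper.

import numpy as np
from scipy.optimize import minimize

def cni_action_probs(C, N, I):
    # Predicted P(action) for the four standard dilemma types, in order:
    # (proscriptive norm, benefits > costs), (proscriptive norm, benefits < costs),
    # (prescriptive norm, benefits > costs), (prescriptive norm, benefits < costs).
    guided = (1 - C) * (1 - N) * (1 - I)  # response driven by neither consequences nor norms
    return np.array([
        C + guided,                # consequences favor action, norm forbids it
        guided,                    # consequences and norm both favor inaction
        C + (1 - C) * N + guided,  # consequences and norm both favor action
        (1 - C) * N + guided,      # consequences favor inaction, norm requires action
    ])

def neg_log_lik(params, k_action, n_trials):
    # Binomial negative log-likelihood of the observed action counts.
    C, N, I = params
    p = np.clip(cni_action_probs(C, N, I), 1e-9, 1 - 1e-9)
    return -np.sum(k_action * np.log(p) + (n_trials - k_action) * np.log(1 - p))

# Hypothetical action counts out of 100 judgments per dilemma type.
k = np.array([70, 15, 90, 55])
n = np.array([100, 100, 100, 100])

fit = minimize(neg_log_lik, x0=[0.5, 0.5, 0.5], args=(k, n),
               bounds=[(0.001, 0.999)] * 3)
C_hat, N_hat, I_hat = fit.x
print(f"C = {C_hat:.2f}, N = {N_hat:.2f}, I = {I_hat:.2f}")

In the paper's third-party setting, the same machinery is applied to observers' approval of an AI agent's choices rather than to first-person decisions; a higher fitted N corresponds to the central claim that socially connected agents are judged more by norm adherence than by outcomes.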