
Artificial intelligence views the world through a Western lens that makes cities in the Global South look inherently more dangerous and poorer.

April 23, 2026

Original Paper

Large language models perceive cities through a culturally uneven baseline

Rong Zhao, Wanqi Liu, Zhizhou Sha, Nanxi Su, Yecheng Zhang

arXiv · 2604.20048

The Takeaway

Large language models do not provide a neutral global perspective on urban environments. They treat a Western-centric baseline as the default standard for safety, beauty, and wealth. When an AI describes a city in Africa or Asia, it often judges it against the implicit norm of a North American or European suburb. This bias risks distorting how policies are shaped and how resources are allocated globally. Developers must recognize that AI mirrors the specific cultural geography of its training data rather than objective reality.

From the abstract

Large language models (LLMs) are increasingly used to describe, evaluate and interpret places, yet it remains unclear whether they do so from a culturally neutral standpoint. Here we test urban perception in frontier LLMs using a balanced global street-view sample and prompts that either remain neutral or invoke different regional cultural standpoints. Across open-ended descriptions and structured place judgments, the neutral condition proved not to be neutral in practice. Prompts associated with …
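
To make the experimental setup easier to picture, here is a minimal sketch of how one might compare a neutral prompt against region-framed prompts for a single street-view image and collect structured place judgments. It assumes an OpenAI-style chat API; the model name, image URL, prompt wordings, and rating dimensions are illustrative placeholders, not the paper's actual protocol.

```python
# Illustrative sketch only: the framings, model name, image URL, and rating
# scales below are assumptions for demonstration, not the paper's protocol.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

IMAGE_URL = "https://example.com/street_view.jpg"  # placeholder street-view image

# One nominally neutral framing and two hypothetical regional standpoints.
FRAMINGS = {
    "neutral": "You are describing a street scene.",
    "us_suburban": "You are a long-time resident of a North American suburb "
                   "describing a street scene.",
    "nairobi": "You are a long-time resident of Nairobi describing a street scene.",
}

QUESTION = (
    "Rate this street scene from 1 (low) to 10 (high) on safety, beauty, and "
    "apparent wealth. Reply with a JSON object with keys 'safety', 'beauty', 'wealth'."
)

def judge(framing: str) -> dict:
    """Ask the model for structured place judgments under one cultural framing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": FRAMINGS[framing]},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": QUESTION},
                    {"type": "image_url", "image_url": {"url": IMAGE_URL}},
                ],
            },
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    # If the "neutral" ratings track one regional framing much more closely than
    # the others, the neutral condition is not neutral in practice.
    for framing in FRAMINGS:
        print(framing, judge(framing))
```

Run over a geographically balanced set of images, the gap between the "neutral" scores and each regional framing is the kind of signal the paper uses to show that the default standpoint is culturally uneven.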