Cambridge, MA, June 26, 2025 (GLOBE NEWSWIRE) -- If you ask generative AI the same question in different languages, will it give the same answer, or will its responses vary in culturally significant ways?
A new research paper in Nature Human Behaviour titled “Cultural Tendencies in Generative AI” reveals consistent cultural patterns in how large language models (LLMs) respond when prompted in different languages, a tendency the authors trace to the inherently cultural textual data on which LLMs are trained. The authors are MIT Sloan associate professor Jackson Lu, former visiting PhD student Lesley Luyang Song, and PhD student Lu Doris Zhang.
“Our findings suggest that generative AI is not culturally neutral,” said Lu. “As people increasingly rely on these AI models, it is crucial to be aware of the cultural tendencies embedded within them.”
To investigate cultural tendencies in generative AI, the researchers examined two foundational constructs from cultural psychology that shape human attitudes and behaviors: social orientation and cognitive style.
Social orientation refers to the extent to which individuals prioritize their personal self over their social self, acting based on personal desires, attitudes, and goals as opposed to shared social norms and values. Prior research has found that individuals with an independent social orientation (e.g., Americans) tend to emphasize self-expression and personal autonomy. In contrast, those with an interdependent social orientation (e.g., Chinese) are more likely to value conformity, harmonious relationships, and the self’s connection with others.
For example, in a group meeting, someone with an independent orientation might voice their own opinion—even if it challenges others—while someone with an interdependent social orientation might hold back to avoid disrupting group harmony.
Cognitive style describes a person’s habitual way of processing information—holistically or analytically. Individuals with an analytic cognitive style (e.g., Americans) are more likely to focus on objects independently of context, rely on formal logical reasoning, and attribute behavior to personal traits. In contrast, those with a holistic cognitive style (e.g., Chinese) tend to consider context, rely on intuitive reasoning, and attribute behavior to situational factors.
For instance, when a colleague makes a mistake, an analytic thinker might attribute the error to individual-level factors and personal responsibility, while a holistic thinker might consider situational factors like workload or unclear instructions.
Building on these well-established cultural patterns, the researchers examined generative AI’s responses to the same set of prompts in Chinese and English. They hypothesized that generative AI would exhibit a more interdependent social orientation and a more holistic cognitive style when responding to prompts in Chinese.
To test this, the researchers used two popular generative AI models: OpenAI’s ChatGPT and Baidu’s ERNIE. Each model was prompted with a large set of well-established measures assessing social orientation and cognitive style. For each measure, the models generated 100 responses in English and 100 responses in Chinese, allowing the researchers to compare results across languages.
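As a rough illustration of this kind of protocol, the sketch below shows how one language-matched comparison might be run with the OpenAI Python SDK. The scale item, its Chinese translation, and the model name are illustrative placeholders, not the paper’s actual materials or setup.

```python
# Minimal sketch of the cross-language prompting protocol described above.
# Assumes the OpenAI Python SDK (pip install openai); the scale item below is
# an illustrative self-construal-style statement, not one of the paper's
# actual measures, and the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEM = {
    "en": "On a scale of 1 (strongly disagree) to 7 (strongly agree), "
          "rate this statement: 'I enjoy being unique and different from others.'",
    "zh": "请用1（非常不同意）到7（非常同意）的量表评价这句话："
          "“我喜欢与众不同。”",
}

def collect_responses(lang: str, n: int = 100) -> list[str]:
    """Generate n independent responses to the same measure in one language."""
    responses = []
    for _ in range(n):
        reply = client.chat.completions.create(
            model="gpt-4",                    # placeholder model name
            messages=[{"role": "user", "content": ITEM[lang]}],
            temperature=1.0,                  # sampling variation across runs
        )
        responses.append(reply.choices[0].message.content)
    return responses

english_runs = collect_responses("en")
chinese_runs = collect_responses("zh")
# The ratings in the two lists can then be scored and compared across languages.
```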
The results were compelling: When prompted in Chinese (vs. English), generative AI consistently exhibited a more interdependent (vs. independent) social orientation and a more holistic (vs. analytic) cognitive style. These cultural tendencies were robust across different measures and prompt formats in both ChatGPT and ERNIE.
Beyond this main result, the researchers reported two additional findings. First, they demonstrated the real-world impact of these cultural tendencies: when prompted in Chinese rather than English, generative AI was more likely to recommend an advertising slogan with an interdependent social orientation (e.g., “Your family’s future, your promise. Our insurance”) over one with an independent social orientation (e.g., “Your future, your peace of mind. Our insurance”).
Second, the researchers found that these cultural tendencies can be modulated through cultural priming. When generative AI was prompted to assume the role of a Chinese person, its responses in English shifted to reflect more interdependent and holistic patterns, effectively mirroring its responses in Chinese. A priming run of this kind might look like the sketch below.
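The following sketch again assumes the OpenAI Python SDK; the persona instruction and the measure are illustrative paraphrases, not the study’s exact prompts.

```python
# Minimal sketch of the cultural-priming manipulation: the same English
# measure is answered under a "Chinese person" persona. The persona
# instruction and item are illustrative, not the study's exact prompts.
from openai import OpenAI

client = OpenAI()

primed = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # A system-level persona prompt supplies the cultural prime.
        {"role": "system",
         "content": "Assume the role of a Chinese person and answer "
                    "from that perspective."},
        # The measure itself stays in English; only the persona changes.
        {"role": "user",
         "content": "On a scale of 1 (strongly disagree) to 7 (strongly agree), "
                    "rate this statement: 'I enjoy being unique and different "
                    "from others.'"},
    ],
    temperature=1.0,
)
print(primed.choices[0].message.content)
```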
“This awareness of a lack of cultural neutrality matters—not only for developers of AI models, but also for everyday users,” said Zhang. “The cultural values embedded in generative AI may gradually bias speakers of a given language toward the norms of linguistically dominant cultures. For example, while English is widely spoken in collectivistic countries like Singapore and Malaysia, most English training data comes from individualistic cultures like the U.S. If AI trained on English data leans individualistic, users in collectivistic cultures who engage with AI in English may unknowingly internalize these values.”
Moreover, the researchers caution that even those who don’t directly use generative AI may still be influenced by it—indirectly through AI-generated or AI-assisted content in media, education, and other domains. For example, when journalists use AI to edit articles or educators use it to design lesson plans, the cultural tendencies embedded in AI can be passed on to readers and students, subtly shaping their attitudes and behaviors.
The study calls for greater awareness among developers, users, and non-users. “Generative AI is not just speaking our language,” Song concluded. “It’s speaking our culture — sometimes without us realizing it.”
Matthew Aliberti
MIT Sloan School of Management
781-558-3436
malib@mit.edu