Biased Data Reproduces & Amplifies Bias, Exclusion
Over time, LLMs will rewrite and/or silence histories, erasing cultures that fall outside their dominant, biased training datasets. This will amplify ableist, gendered, genocidal, racist, and classist harms in society, particularly in knowledge reproduction. In this way, LLMs can be seen as a recolonizing pathway: as technology is increasingly treated as a key feature of societal design and decision-making, it will reproduce Western-biased ontologies and epistemologies.
Read more:
- New Report Highlights Urgent Need for Addressing Gender Bias in AI Systems
- Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines
- ChatGPT doesn’t know where the world’s copper comes from, AI images show mining stuck in the Great Depression
- The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques
- Atlas of AI
- Ethical and social risks of harm from Language Models

Impacts
- Social Norms + Knowledge Reproduction