"

Biased Data Reproduces & Amplifies Bias, Exclusion

Over time, LLMs will rewrite and/or silence histories, erasing cultures that fall outside their dominant, biased training datasets. This will amplify ableist, gendered, genocidal, racist, and classist harms in society, particularly in knowledge reproduction. In this way, LLMs can be seen as a recolonizing pathway: as technology is increasingly turned to as a key feature of societal design and decision-making, it will reproduce Western-biased ontologies and epistemologies.

 


Harms: Ableist; Ecocidal; Gendered; Genocidal; Racist; Socioeconomic

Impacts

  • Social Norm + Knowledge Reproduction

License


Harm Considerations of Large Language Models (LLMs) Copyright © by Teaching and Learning, University Libraries is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.