"

Toxic Dominant Culture Maintains Norms of Oppression

DeepMind (an AI research lab owned by Google) identified “discrimination, exclusion, and toxicity” as the first of a long list of harms from LLMs. The same report identified harms arising from the following malicious uses of LLMs: “making disinformation cheaper and more effective; facilitating fraud, scams and more targeted manipulation; assisting code generation for cyber attacks, weapons, or malicious use; [and] illegitimate surveillance and censorship.” It also identified human-computer interaction harms of “creating avenues for exploiting user trust” and “promoting harmful stereotypes by implying gender or ethnic identity” (Weidinger et al., 2021).

LLMs also pose the harm of a (further) move away from a feminist worldview of relationality, reproducing patriarchal hegemony through our day-to-day digital norms.


Read more: 

Harms: Ableist; Ecocidal; Gendered; Genocidal; Racist; Socioeconomic

Impacts

  • Social Norm + Knowledge Reproduction

License


Harm Considerations of Large Language Models (LLMs) Copyright © by Teaching and Learning, University Libraries is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.