
Ongoing Discrimination, Exclusion, Harms

As LLMs reproduce toxic dominant cultural norms, their effects will compound over time, perpetuating ableist, ecocidal, gendered, genocidal, racist, and socioeconomic harms as biased datasets increasingly reaffirm themselves.

Although LLM products may try to limit the harmful outputs produced by their datasets and training, malevolent or nefarious users (and sometimes even benign ones) can still craft workarounds that achieve harmful outputs. Given polarized political climates, the vulnerability of democratic institutions to social media, and other toxic norms already present, the potential nefarious uses of LLMs pose cause for concern.

The real-world impact of reaffirming toxic dominant cultural norms will be seen in public policy, education, employment, housing, dating, and most areas of life where relationships can be mediated through technology. The divides that exist due to the dominant paradigms of settler colonization, capitalism, globalization, racism, and patriarchy will widen.

Whether and how we choose to mitigate LLM harms will speak volumes about our cultural ethics. Will we continue to adopt authoritarian modes of control through a culture of surveillance, policing, and enforcement, using power and hierarchy to suppress, legislate, and regulate? Will we prioritize capitalist realism over human and environmental wellbeing and let economic issues drive the course of action, deferring ethical responsibility to the fallacy of the invisible hand regulating marketplace decisions?


Read more: 

Harms: Ableist; Ecocidal; Gendered; Genocidal; Racist; Socioeconomic

Impacts

  • Social Norm + Knowledge Reproduction

License


Harm Considerations of Large Language Models (LLMs) Copyright © by Teaching and Learning, University Libraries is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.