Toxic Dominant Culture Maintains Norms of Oppression
DeepMind (an AI research lab owned by Google) identified “discrimination, exclusion, and toxicity” as the first in a long list of harms from LLMs. The same report also identified harms arising from the following malicious uses of LLMs: “making disinformation cheaper and more effective; facilitating fraud, scams and more targeted manipulation; assisting code generation for cyber attacks, weapons, or malicious use; [and] illegitimate surveillance and censorship.” It also identified human-computer interaction harms of “creating avenues for exploiting user trust” and “promoting harmful stereotypes by implying gender or ethnic identity.” (Weidinger et al., 2021)
LLMs also pose the harm of moving us (further) away from a feminist worldview of relationality, reproducing patriarchal hegemony through our day-to-day digital norms.
Read more:
- Dialect prejudice predicts AI decisions about people’s character, employability, and criminality
- Enchanted Determinism: Power without Responsibility in Artificial Intelligence
- Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines
- Sam Altman says ‘potentially scary’ AI is on the horizon. This is what keeps AI experts up at night
- Atlas of AI
- Ethical and social risks of harm from Language Models

Impacts
- Social Norm + Knowledge Reproduction