The Morality Machine — Episode 3: The Ethics of LLM Poisoning
Welcome to the Morality Machine, a podcast created by Elise Hachfeld and Daniel Evans for our Computing Ethics class.
In this episode, we explore the dark side of large language models (LLMs). Any time a new technology is introduced, a race begins to hack it, and AI is no exception. Both bad actors and activists have discovered how to ‘poison’ LLMs to manipulate their outputs.
Overview
First, we examine one potential dark side of AI poisoning: Russia’s misinformation and disinformation campaigns, spread through the Pravda network. Next, we discuss the tools Glaze and Nightshade, which offer a positive example of how AI poisoning can be used to protect artists’ intellectual property.

Listen to the full episode below:
Sources
- Russia seeds chatbots with lies. Any bad actor could game AI the same way.
- Russia-linked Pravda network cited on Wikipedia, LLMs, and X
- This tool strips away anti-AI protections from digital art
- The AI lab waging a guerrilla war over exploitative AI
- A small number of samples can poison LLMs of any size