Similar Posts
The Morality Machine — Episode 2: Can Large Language Models Tell the Truth?
In this episode, we test whether large language models (LLMs) can be trusted to provide accurate sources. By comparing responses from GPT-oss-20B and DeepSeek to a politically charged research question, we uncover a consistent pattern of fabricated sources in both models.
Should You Jailbreak ChatGPT with Psychology?
The same psychological tricks that persuade humans can also manipulate AI. By appealing to authority, emotion, or even imagined stakes, users have convinced models like ChatGPT to ignore their own safety rules. This raises a difficult ethical question: when is probing these limits valuable research, and when is it reckless?
The Morality Machine — Episode 3: The Ethics of LLM Poisoning
In this episode of The Morality Machine, we discuss the hidden dangers of LLM poisoning. We uncover how bad actors, propaganda networks, and even artists manipulate AI systems by seeding tainted content into the data those systems learn from.
When the Algorithm Gets You Wrong: The Right to Be an Exception
As algorithms increasingly make decisions about who gets jobs, loans, or parole, what happens to the people who don’t fit the pattern? This post explores “The Right to Be an Exception to a Data-Driven Rule,” a case study examining how data-driven systems can unfairly disadvantage those who fall outside statistical averages.