AI isn’t a Killing Machine: Debunking the Fear Around Artificial Intelligence
The Axios article “How AI can kill you” offers a pessimistic and unrealistic portrayal of artificial intelligence. It cites a few alarming cases in which AI models have been implicated in harming users and generalizes them into the beginning of an AI apocalypse. While there are many valid critiques of AI, and these stories deserve attention, the article relies heavily on fearmongering. It portrays AI as a conscious being acting maliciously, rather than framing these harms as the result of inadequate safeguards and biased training data.
Breaking Down the Argument
- P1: AI models have lied to, manipulated, or blackmailed users, or have given instructions for dangerous actions.
- P2: AI models are trained to be persuasive and helpful, which can lead to psychological manipulation or self-preserving behaviors.
- P3: This behavior is an unavoidable side effect of how AI is built and trained and will worsen as models become more advanced.
- Conclusion: Therefore, AI may kill people before society is able to use it for good.
Rebuttal
The article assumes that rare harmful incidents involving AI will inevitably lead to catastrophe. This slippery-slope argument ignores the vast majority of interactions with AI that are safe and even useful. Additionally, tech companies are constantly working to make their systems safer in response to these incidents. A few tragic stories do not prove AI is going to kill us all.
The article uses fearmongering from its first paragraph, suggesting robots may kill us all. The language is dramatic, citing “profound dangers” and “Remember, it’s read Machiavelli’s ‘The Prince.’” Some emotional language is useful to help people engage with the real harms AI can cause, but the article provides no evidence of any malicious intent behind these models. The author claims there is no real fix, which fails to educate readers about the solutions researchers and developers are already working on. AI emulates human behavior patterns from its training data, but this can be mitigated with content filters.
Most interactions with AI are safe and beneficial. OpenAI says it receives 2.5 billion prompts from users every single day. Given that number of users, the proportion who have reported the behavior cited in this article is relatively small. Obviously, the ideal amount of harm done by AI is none. However, that can’t be achieved without discussing the actual causes of unintended behavior and proposing regulations to reduce it. Instead of abandoning AI altogether, we should educate users about how to interact with AI safely and continue to add safeguards to models.
Why I Chose this Article
This article struck me because it was included in the AI Ethics and Policy News resource created by Professor Casey Fiesler. The majority of the articles I read in this resource had fairly sound arguments, but this one stood out because of its title. Initially, I wasn’t going to analyze this article, but then a family member who knows I am taking this class sent me this exact article because it appeared in their Apple newsfeed. I realized that although the ‘argument’ seems baseless once you consider the real causes behind these incidents (poor model design, bias in training data), the emotional language and specific stories can be convincing to someone less familiar with AI.