Blog

  • When Deleted Data is Recoverable

    Does clicking delete really do what we think? In many cases, no. In his case study “Complete Delete,” Simson Garfinkel explores the phenomenon of remnant data: copies that linger in backups, caches, and old devices long after deletion. Modern devices prioritize accessibility at the cost of privacy.

    Continue Reading

  • Electronic Overconsumption and Green Colonialism

    Many critics of AI cite environmental concerns like water and land usage as justification for restricting its use. These concerns are not unique to AI, though: modern technology like laptops and electric cars depends on numerous mined minerals and often isn’t recyclable. Is this a fault of the technology itself or a symptom of a broader societal issue?

    Continue Reading

  • Exploring the Intricacies of Generative AI

    ChatGPT is a large language model (LLM) that generates text by predicting the next word. It has revolutionized the field of AI, but many have concerns about intellectual property, the accuracy of its output, and its environmental impact.

    Continue Reading

  • When the Algorithm Gets You Wrong: The Right to Be an Exception

    As algorithms increasingly make decisions about who gets jobs, loans, or parole, what happens to the people who don’t fit the pattern? This post explores “The Right to Be an Exception to a Data-Driven Rule,” a case study examining how data-driven systems can unfairly disadvantage those who fall outside statistical averages.

    Continue Reading

  • AI Isn’t a Killing Machine: Debunking the Fear Around Artificial Intelligence

    Despite headlines warning that “AI can kill you,” most interactions with artificial intelligence are safe and beneficial. This article analyzes how fear-based reporting exaggerates AI risks and argues for thoughtful regulation and public education instead of panic.

    Continue Reading

  • Should You Jailbreak ChatGPT with Psychology?

    The same psychological tricks that persuade humans can also manipulate AI. By appealing to authority, emotion, or even imagined stakes, users have convinced models like ChatGPT to ignore their own safety rules. This raises an ethical question: when is probing these limits a valuable act of research, and when is it reckless?

    Continue Reading