
Social Media Algorithms are Feeding You AI Slop

Abstract

Nearly everything you see on the internet is tailored specifically to you — from Google search to TikTok’s For You Page. Social media companies use a combination of content recommendation algorithms and your personal data to curate your feed. They gather everything available about you — what content you like or dislike, and how much time you spend on it — to infer what kind of person you are. In some cases these algorithms are a ‘black box’ that even the companies and developers themselves can’t explain. It’s perfectly legal, but could it be eroding our ability to recognize what is real, or even disrupting the democratic process itself?

Learning Objectives

  • Understand the basics of collaborative filtering.
  • Assess the ethical concerns regarding content recommendation systems.
  • Recognize the hallmarks of AI generated content.

How Content Recommendation Works

Social media platforms like YouTube, TikTok, Instagram, and Facebook, as well as streaming services like Netflix and Hulu, rely on content recommendation algorithms to suggest content to users. The specific details of how each company’s recommendation algorithm works are usually kept as ‘industry secrets,’ but we will explore the main theory underlying this technology.

Collaborative Filtering

Collaborative filtering is a technique used to recommend content based on what you (and other users) have liked in the past. The first type is user-based collaborative filtering:

User 1

  • Likes Spiderman
  • Likes Dune
  • Dislikes The Notebook

User 2

  • Likes The Barbie Movie
  • Likes Ratatouille
  • Dislikes Joker

User 3

  • Likes Dune
  • Likes Joker
  • ???

In this case, we might recommend Spiderman to User 3 because they are similar to User 1. This type of filtering finds patterns between users. Even if User 3 has not watched movies similar to Spiderman before, it is recommended to them because users with similar likes and dislikes have liked it.

There is also item-based collaborative filtering, which recommends things that are similar to what you have engaged with in the past. For example, if you liked several posts with the tag ‘art,’ you might be shown more posts with that tag.
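The user-based approach above can be sketched in a few lines of Python. This is a minimal illustration using the three example users, not a production recommender: ratings are encoded as +1 (like) and −1 (dislike), similarity between users is measured with cosine similarity over the movies both have rated, and we recommend whatever the most similar user liked but our target user hasn’t seen.

```python
# Minimal sketch of user-based collaborative filtering,
# using the three example users above.
# Ratings: +1 = like, -1 = dislike, absent = not yet rated.
from math import sqrt

ratings = {
    "User 1": {"Spiderman": 1, "Dune": 1, "The Notebook": -1},
    "User 2": {"The Barbie Movie": 1, "Ratatouille": 1, "Joker": -1},
    "User 3": {"Dune": 1, "Joker": 1},
}

def cosine_similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(a[i] ** 2 for i in shared))
    norm_b = sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def recommend(target, all_ratings):
    """Recommend items the most similar other user liked."""
    others = {u: r for u, r in all_ratings.items() if u != target}
    most_similar = max(
        others, key=lambda u: cosine_similarity(all_ratings[target], others[u])
    )
    return [
        item
        for item, score in all_ratings[most_similar].items()
        if score > 0 and item not in all_ratings[target]
    ]

print(recommend("User 3", ratings))  # prints: ['Spiderman']
```

User 3 agrees with User 1 on Dune but disagrees with User 2 on Joker, so User 1 is the closest match, and Spiderman is recommended — the same conclusion the worked example reaches above.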

Details of the Algorithm

The foundation of collaborative filtering is an algorithm called K-Nearest Neighbors (KNN). This type of algorithm can be used for both classification problems (predicting a category) and regression problems (predicting a numerical output).

A KNN model with K = 3 predicts each new observation from the 3 observations closest to it. For example, if you are classifying iris flowers by sepal width and length, a new flower is assigned the class held by the majority of the K closest flowers in the training data.

[Figure: KNN classification of the iris dataset. Credit: scikit-learn documentation]
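The KNN idea above can be written from scratch in a few lines. This is an illustrative sketch with K = 3 on a handful of made-up iris-style measurements (sepal length, sepal width), not real dataset values: to classify a new point, find the 3 closest training points by Euclidean distance and take a majority vote of their labels.

```python
# Minimal K-Nearest Neighbors classifier (K = 3), written from scratch.
# The (sepal_length, sepal_width) values below are illustrative stand-ins.
from collections import Counter
from math import dist

training = [
    ((5.1, 3.5), "setosa"),
    ((4.9, 3.0), "setosa"),
    ((5.0, 3.4), "setosa"),
    ((7.0, 3.2), "versicolor"),
    ((6.4, 3.2), "versicolor"),
    ((6.9, 3.1), "versicolor"),
]

def knn_predict(point, training_data, k=3):
    """Predict a label by majority vote among the k closest training points."""
    nearest = sorted(training_data, key=lambda t: dist(point, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((5.0, 3.3), training))  # prints: setosa
```

The new flower at (5.0, 3.3) sits near the three setosa points, so all 3 of its nearest neighbors vote setosa. Real recommendation systems apply the same neighbor-voting idea in much higher-dimensional spaces of user behavior.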

Ethical Concerns

Collaborative filtering is a very effective method of recommending content, but the process also has some concerning side effects.

If you find it difficult to get into a new social media platform, but become hooked once you do, this may be the reason why. As a new user you have generated little to no data, so there is nothing to compare you to other users for user-based recommendation; as you engage, the recommendations quickly improve.

It is also one of the reasons that some posts go ‘viral’ while others never take off. If you post something to social media and nobody likes it, the algorithm has nothing to compare it to, so it cannot guess what kind of people might like it. Posts that start with low engagement are therefore unlikely to be shown to many people.

Facebook’s AI Slop Problem

In recent years, Facebook has increasingly begun to recommend low-quality, AI-generated content, known as ‘AI slop.’ This content is not always deliberate misinformation, but it is intended to exploit the content recommendation system for financial profit.

There is no single trick for identifying AI slop, but it tends to be impersonal and repetitive, and there are several signs that can be indicative of AI-generated content.

Recognizing AI-Generated Text

  • Excessive amount of buzzwords like unleash, empower, tailor, or seamless
  • Excessive use of emojis
  • Sentences that don’t vary in structure
  • Provides a vague description of ideas/events with no detail or citations
  • Hallucinated information (things that are verifiably false)

Recognizing AI-Generated Photos and Videos

  • Extra fingers, toes, or teeth
  • Unrealistic lighting or shadows
  • Excessive detail or lack of detail
  • Inconsistent textures
  • Uncanny facial expressions and movement
  • Objects changing shape or color


Discussion Questions

  • What are the strengths and weaknesses of different algorithms used for content recommendation?
  • What kinds of information about a user are most valuable to content recommendation algorithms? Why?
  • Do you think algorithmic recommendation improves or limits your understanding of the world?
  • Should platforms be required to flag AI-generated content?
  • Who should be held responsible when algorithms amplify misinformation?
  • What aspects of content recommendation systems should be regulated, if any?
