Regulating Content Recommendation Algorithms
Social media companies claim little to no responsibility for the content they host. However, framing regulation solely around what content should or should not be allowed ignores the fact that these companies are responsible for amplifying or suppressing content, regardless of whether it is technically visible. In practice, visibility determines impact. As a result, platforms exert significant influence over public discourse without being held responsible for it.
Why is Regulation Important?
As AI-generated content becomes increasingly cheap to produce and difficult to distinguish from content created by humans, the role of algorithms in amplifying content becomes more important. Algorithms optimize for engagement, pushing emotionally charged, surprising, or repetitive material. Low-quality content and misinformation can spread rapidly because they provoke arguments and emotional responses.
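The dynamic described above can be made concrete with a minimal sketch of engagement-based ranking. The field names and weights here are hypothetical and do not reflect any platform's actual model; the point is only that when the scoring function rewards reactions, provocative posts rise regardless of quality.

```python
# Illustrative sketch of engagement-based ranking.
# Weights and post fields are hypothetical, not any platform's real model.

def engagement_score(post):
    # Reactions that signal strong responses (comments, shares) are
    # weighted more heavily, so emotionally charged posts float upward.
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"])

def rank_feed(posts):
    # Sort purely by predicted engagement -- the accuracy or quality
    # of the content plays no role in the ordering.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm-news", "likes": 120, "comments": 4, "shares": 2},
    {"id": "outrage-bait", "likes": 40, "comments": 60, "shares": 30},
]
feed = rank_feed(posts)
```

Even though the calm post has three times as many likes, the argumentative post ranks first because comments and shares dominate the score.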
Influence on Human Behavior
In 2012, Facebook conducted an experiment on ‘emotional contagion’ in which it manipulated some users’ feeds to include either more positive or more negative content. The researchers found that they could influence users’ moods by changing the content they saw, without their knowledge.
The Filter Bubble Effect
Social media platforms are, in effect, large social networks where information and opinions are shared. However, most people are unaware of the algorithmic decisions guiding what they see. This leads to an effect called the ‘filter bubble.’ When you like something on social media, it signals to the platform what content you agree with. To maximize engagement, many algorithms prioritize showing you more of that content. As a result, you miss out on other opinions and perspectives.
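The feedback loop behind the filter bubble can be sketched in a few lines. The topic labels and the simple additive update rule below are hypothetical stand-ins for a real recommender, but they show how each ‘like’ narrows what the feed surfaces next.

```python
# Illustrative sketch of the filter-bubble feedback loop.
# Topic labels and the update rule are hypothetical, not a real recommender.

from collections import defaultdict

def update_interests(interests, liked_topic, boost=1.0):
    # Every "like" nudges the interest model toward that topic...
    interests[liked_topic] += boost
    return interests

def recommend(posts, interests):
    # ...and the feed then ranks agreeable topics first, so dissenting
    # perspectives gradually disappear from view.
    return sorted(posts, key=lambda p: interests[p["topic"]], reverse=True)

interests = defaultdict(float)
for _ in range(3):
    update_interests(interests, "team_a")  # user likes three team-A posts

posts = [{"id": 1, "topic": "team_b"}, {"id": 2, "topic": "team_a"}]
feed = recommend(posts, interests)
```

After only three likes, the opposing topic is already pushed below the preferred one, and every further interaction widens the gap.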
Most people believe that what they see is representative of the world, but a filtered version of the world distorts that perception. It becomes difficult for people to talk to those who hold different views and to make informed decisions. This can be especially dangerous when it comes to politics.
What Could be Regulated?
User Control Over the Algorithm
Currently, users have little to no control over how content is shown to them. Regulation could require social media companies to provide chronological feed options or the ability to customize what content is shown.
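A chronological feed option is straightforward to picture in code. The sketch below is a hypothetical illustration of the regulatory idea, not an existing platform API: the user chooses the ordering, and the chronological mode involves no algorithmic amplification at all.

```python
# Hypothetical sketch of a user-controlled feed toggle (a regulatory
# idea, not an existing API): users can opt out of engagement ranking.

from datetime import datetime

def build_feed(posts, mode="chronological"):
    if mode == "chronological":
        # Newest first -- no algorithmic amplification.
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    # Engagement ranking, shown for contrast.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

posts = [
    {"id": "old-viral", "posted_at": datetime(2024, 1, 1), "engagement": 900},
    {"id": "new-quiet", "posted_at": datetime(2024, 6, 1), "engagement": 10},
]
chrono = build_feed(posts, mode="chronological")
ranked = build_feed(posts, mode="engagement")
```

The same two posts appear in opposite orders depending on the mode, which is exactly the kind of user choice such a regulation would mandate.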
Ranking Criteria
Social media platforms rarely disclose how their algorithms prioritize content. While companies argue that their algorithms are trade secrets, regulation could require a higher level of disclosure. Informing users whether engagement or emotional sentiment is prioritized, how AI-generated content is treated compared with human-generated content, and whether certain types of content are promoted or hidden would allow them to better understand how social media shapes their attention.
AI-Generated Content
As AI-generated content becomes more prevalent, regulation could require clear labeling of AI content, limits on its promotion, or disclosure that the recommendation algorithm is based on machine learning.