
With 2.1 billion monthly users, YouTube is a major media platform that has become an important component of many Americans’ news diets. Simultaneously, it has garnered a reputation for stoking the flames of political extremism, making it a focal point among researchers studying video-streaming platforms and their intersection with political polarization. The mainstream media has also taken interest in extremism on YouTube—New York Times writer and Princeton professor Zeynep Tufekci has argued that YouTube’s recommendation algorithm radicalizes its users by exposing them to increasingly polarizing content.
A substantial body of literature has sought to examine this phenomenon, but with mixed results; some researchers claim that filter bubbles (echo chambers of similar content) and rabbit holes (sequences of videos that present more extreme content over time) play a role in influencing users' political beliefs by shaping their media diet. Other work, including research from Homa Hosseinmardi at the CSSLab, disputes the influence of such filter bubbles. Still others focus on the downstream effects of algorithmic polarization, examining methods to mitigate its influence under the assumption that it exists.
However, it is challenging to quantify the causal effects of experiencing a slanted (biased) recommendation algorithm. First, the algorithm's recommendations are confounded with user choice, so it is unclear whether a politically homogeneous media diet should be attributed more to the algorithm or to users' own preferences. Second, it is difficult to directly test changes to recommendation algorithms; YouTube is a proprietary, black-box system, and its underlying processes are unknown to researchers.
To better disentangle the effects of the algorithm from the effects of user choice, researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania created a YouTube-like interface where participants watched videos and interacted with the platform. The platform was fully instrumented, allowing researchers to observe participants' viewing behaviors (such as watch time, likes, shares, and saves). After watching videos on the platform for an average of 23 minutes, participants completed surveys that quantified the extent to which the videos shifted their views.
By running experiments on this custom-built interface, an interdisciplinary team of researchers, including Wharton PhD student Emily Hu and professor Dean Knox, along with Naijia Liu, Yasemin Savas, and others from Harvard, MIT, Princeton, and WashU, found that algorithmic recommendations had little impact on participants' political beliefs and behaviors. Their new paper, "Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube," was published today in the Proceedings of the National Academy of Sciences (PNAS).
To carry out this study, the researchers designed four experiments, involving about 9,000 participants, in which participants were randomized into one of two styles of recommendation algorithm: "slanted" or "balanced." In the slanted condition, participants were recommended more content that aligned with the political ideology of the most recently watched video, whereas in the balanced condition, they were recommended equal amounts of content that aligned with or opposed the ideology of the most recently watched video. The researchers then evaluated the effect of the algorithm by comparing viewers' choices and their post-viewing opinions across the two conditions.
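To make the two conditions concrete, here is a minimal sketch of how a recommendation slate could be filled under a "slanted" versus a "balanced" rule. This is an illustration only, not the authors' actual implementation; the slate size, the share of same-slant videos in the slanted condition, and the "liberal"/"conservative" labels are all hypothetical parameters chosen for the example.

```python
import random

def build_slate(last_video_slant, candidates, condition,
                slate_size=6, slanted_share=0.75):
    """Illustrative sketch (not the study's code) of one recommendation step.

    last_video_slant: slant of the most recently watched video
        ("liberal" or "conservative").
    candidates: dict mapping each slant to a list of candidate video IDs.
    condition: "slanted" or "balanced" (the randomized treatment).
    slanted_share: assumed fraction of same-slant videos in the slanted
        condition (a made-up parameter for illustration).
    """
    other = "conservative" if last_video_slant == "liberal" else "liberal"

    if condition == "slanted":
        # Majority of the slate matches the slant of the last-watched video.
        n_same = round(slate_size * slanted_share)
    else:
        # Balanced: equal amounts of aligned and opposing content.
        n_same = slate_size // 2

    slate = random.sample(candidates[last_video_slant], n_same)
    slate += random.sample(candidates[other], slate_size - n_same)
    random.shuffle(slate)
    return slate
```

Under this sketch, a participant in the slanted condition who just watched a liberal-leaning video would see a slate dominated by liberal-leaning recommendations, while a participant in the balanced condition would see an even mix.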
The video content focused on two policy issues: gun control and the minimum wage. In total, the experiments produced over 130,000 recommendations, all generated by the researchers' experimentally manipulated recommendation system, which was designed to reflect real recommendations scraped from YouTube.
Studies 1–3 tested the filter-bubble effect: participants began with an initial ("seed") video (liberals received a liberal seed video, conservatives received a conservative seed video, and moderates were randomly assigned either one) and then received either slanted or balanced recommendations, with the freedom to choose which videos to watch next. Study 4 simulated the rabbit-hole effect by showing participants a fixed sequence of "extremizing" recommendations.
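The seed-assignment rule for Studies 1–3 can be summarized in a few lines. Again, this is a hypothetical sketch under the description above, not the researchers' assignment code; the function and pool names are invented for illustration.

```python
import random

def assign_seed(ideology, liberal_seeds, conservative_seeds):
    """Pick a starting ("seed") video matching the participant's ideology;
    moderates are randomly assigned a liberal or conservative seed."""
    if ideology == "moderate":
        ideology = random.choice(["liberal", "conservative"])
    pool = liberal_seeds if ideology == "liberal" else conservative_seeds
    return random.choice(pool)
```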
Across all experiments, the researchers observed limited effects of the recommendation algorithms on either participants' viewing behavior or their political views, with the exception of conservatives shifting slightly rightward. Regardless of the recommendation algorithm, however, participants tended to choose more videos that aligned with their existing political beliefs; in addition, the rabbit-hole sequences did not prove to be extremizing. These findings contest the notion that filter bubbles and rabbit holes contribute to algorithmic polarization.
In short, while short-term radicalization effects are possible, most people stick to their existing viewing habits; the exception is a small group of users who actively seek out extreme content on video platforms like YouTube.
Although there have been multiple studies of YouTube's algorithms, Hu, Knox, and colleagues' work stands out for its realism: it used human viewers watching real videos on a fully instrumented, YouTube-like platform. The researchers also gave participants more agency during the study, allowing them to interact naturally with the platform in a way that more closely mirrors how people engage with recommendation algorithms in the real world.
While it is challenging to definitively determine whether polarization stems more from user choice or from algorithmic recommendations, the findings from this study suggest that, at least within the issues and setting studied, the oft-blamed filter bubbles and rabbit holes may not be as influential as some have asserted.
"Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube" was published in the Proceedings of the National Academy of Sciences (PNAS).
AUTHORS
Delphine Gardiner, Communications Specialist