A new study from the Computational Social Science Lab shows that while online misinformation exists, it isn’t as pervasive as pundits and the press suggest.

Originally published in Annenberg Research by Hailey Reissman, June 28, 2024

In 2006, Facebook launched its News Feed feature, sparking a seemingly endless, contentious public debate over the power of the “social media algorithm” to shape what people see online.

Nearly two decades and many recommendation-algorithm tweaks later, that debate continues, now laser-focused on whether these algorithms are primarily responsible for exposure to misinformation and extremist content online.

Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Stevens University Professor Duncan Watts, study Americans’ news consumption. In a new article in Nature, Watts, along with David Rothschild of Microsoft Research (Wharton Ph.D. ’11 and a PI in the CSSLab), Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College, and Annenberg alumna Emily Thorson (Ph.D. ’13) of Syracuse University, reviews years of behavioral science research on exposure to false and radical content online. The researchers find that, despite a media narrative claiming the opposite, exposure to harmful and false information on social media is minimal for all but the most extreme users.

A broad claim like “it is well known that social media amplifies misinformation and other harmful content,” recently published in The New York Times, might catch people’s attention, but it isn’t supported by empirical evidence, the researchers say.

“The research shows that only a small fraction of people are exposed to false and radical content online,” says Rothschild, “and that it’s personal preferences, not algorithms, that lead people to this content. The people who are exposed to false and radical content are those who seek it out.”

Misleading Statistics

Articles debating the pros and cons of social media platforms often use eye-catching statistics to claim that these platforms expose Americans to extraordinary amounts of false and extremist content and, in turn, cause societal harms ranging from polarization to political violence.

However, these statistics are usually presented without context, the researchers say.

For example, in 2017, Facebook reported that content made by Russian trolls from the Internet Research Agency reached as many as 126 million U.S. citizens on the platform before the 2016 presidential election. That number sounds substantial, but the two figures measure different things: reach counts everyone who saw even a single piece of this content, while as a share of everything in users’ feeds, the same content accounted for only about 0.004% of what U.S. citizens saw on Facebook.

“It’s true that even if misinformation is rare, its impact is large,” Rothschild says. “But we don’t want people to jump to larger conclusions than what the data seems to indicate. Citing these absolute numbers may contribute to misunderstandings about how much of the content on social media is misinformation.”
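
To see how both figures can be true at once, consider a rough back-of-the-envelope sketch in Python. The 126 million reach and 0.004% share come from the Facebook figures cited above; the total number of feed impressions is a hypothetical placeholder, chosen only to make the arithmetic concrete.

    # Back-of-the-envelope: absolute reach vs. relative exposure.
    # Reported figures: 126M users reached; IRA content was 0.004% of feed items.
    # total_impressions is a HYPOTHETICAL assumption for illustration only.
    reach = 126_000_000                    # unique U.S. users who saw any IRA content
    share_of_feed = 0.004 / 100            # IRA items as a fraction of all feed items
    total_impressions = 5_000_000_000_000  # hypothetical total feed items shown

    ira_impressions = total_impressions * share_of_feed
    print(f"Total IRA items seen: {ira_impressions:,.0f}")               # 200,000,000
    print(f"IRA items per reached user: {ira_impressions / reach:.1f}")  # about 1.6

Under these assumptions, a campaign that “reached” 126 million people works out to roughly one or two items per person, buried among tens of thousands of ordinary feed items, which is exactly the missing context the researchers describe.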

Algorithms vs. Demand

Another popular narrative in discourse about social media is that platforms’ recommendation algorithms push harmful content onto users who wouldn’t otherwise seek out this type of content.

But researchers have found that recommendation algorithms tend to push users toward more moderate content and that exposure to problematic content is heavily concentrated among a small minority of people who already have extreme views.

“It’s easy to assume that algorithms are the key culprit in amplifying fake news or extremist content,” says Rothschild, “but when we looked at the research, we saw time and time again that algorithms reflect demand and that demand appears to be a bigger issue than algorithms. Algorithms are designed to keep things as simple and safe as possible.”

Social Harms 

A recent wave of articles suggests that exposure to false or extremist content on social media is the cause of major societal ills, from polarization to political violence.

“Social media is still relatively new and it’s easy to correlate social media usage levels with negative social trends of the past two decades,” Rothschild says, “but empirical evidence does not show that social media is to blame for political incivility or polarization.”

Improving Public Discourse About Social Media 

The researchers stress that social media is a complex, understudied communication tool and that there is still a lot to learn about its role in society.

“Social media use can be harmful and that is something that needs to be further studied,” Rothschild says. “If we want to understand the true impact of social media on everyday life, we need more data and cooperation from social media platforms.”

To encourage better discourse about social media, the researchers offer four recommendations: measure exposure and mobilization among extremist fringes; reduce both the demand for false and extremist content and its amplification by media and political elites; increase platform transparency and conduct experiments to identify causal relationships and mitigate harms; and fund and engage research around the world.

“Misunderstanding the harms of online misinformation” was published in Nature and authored by Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson, and Duncan J. Watts.
