Annenberg PhD student Neil Fasching and postdoctoral researcher Jennifer Allen of the Computational Social Science Lab (CSSLab) presented their research on April 15th at the 2nd annual Center for Information Network Democracy Workshop: The Impact of AI on (Mis)Information. CIND is an initiative led by Annenberg faculty members Sandra González-Bailón and Yphtach Lelkes that focuses on how information ecosystems shape democratic participation in the digital age.

Neil Fasching: “Model-Dependent Moderation: Inconsistencies in Hate Speech Detection Across LLM-based Systems”

The motivation for Fasching’s research stems from how “AI is increasingly being used for content moderation of the media landscape,” including hate speech detection. Because hate speech cannot be directly regulated by the government, the responsibility for moderation falls on social media and AI companies. However, these moderation systems have been found to be biased, so Fasching evaluated several of them and their impact on the groups of people they are designed to protect.

Neil Fasching

Fasching created a synthetic dataset of over 1.3 million distinct phrases, which he used in a comparative analysis of various moderation systems, including dedicated moderation endpoints (e.g., OpenAI’s Moderation API), frontier LLMs (e.g., ChatGPT), and traditional machine learning classifiers.
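The comparison described above can be sketched in miniature: run the same phrases through several systems and measure how often their hate-speech labels diverge. This is an illustrative sketch, not Fasching’s code; the classifier outputs below are hypothetical stand-ins for the real systems.

```python
def disagreement_rate(labels_by_system):
    """Fraction of phrases on which the systems do not all agree."""
    systems = list(labels_by_system)
    n_phrases = len(labels_by_system[systems[0]])
    disagreements = 0
    for i in range(n_phrases):
        # Collect each system's label for phrase i; more than one
        # distinct label means the systems disagree on this phrase.
        labels = {labels_by_system[s][i] for s in systems}
        if len(labels) > 1:
            disagreements += 1
    return disagreements / n_phrases

# Hypothetical labels (True = flagged as hate speech) for five phrases,
# from three stand-in moderation systems.
labels = {
    "moderation_endpoint": [True, True,  False, False, True],
    "frontier_llm":        [True, False, False, True,  True],
    "classic_ml":          [True, True,  False, False, False],
}

print(disagreement_rate(labels))  # 0.6: the systems disagree on 3 of 5 phrases
```

In the actual study this comparison runs over 1.3 million phrases, and disagreement can be broken out by the group a phrase targets, which is what reveals the group-specific inconsistencies Fasching reports.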

His results showed that the models classified hate speech differently, especially hate speech targeting specific groups, indicating that each model draws different boundaries around harmful content. To broaden the scope of this study, Fasching plans to replicate these methods on real-world data, including posts from X and podcast transcripts, and to examine content moderation topics beyond hate speech.

Jennifer Allen: “Quantifying the impact of misinformation and vaccine-skeptical content on Facebook”

Allen presented her most recent publication, Quantifying the impact of misinformation and vaccine-skeptical content on Facebook. Misinformation poses well-documented societal harms; however, just as concerning but less studied is unflagged vaccine-skeptical content, which is factually accurate but nonetheless misleading. To address this gap, Allen developed a framework to predict the impact of misinformation and vaccine-skeptical content on Americans’ vaccination behaviors.

Using crowdsourcing and machine learning to analyze anti-vaccine headlines from the first three months of the COVID-19 vaccine rollout in 2021, Allen found that unflagged vaccine-skeptical content reduced the likelihood of getting vaccinated roughly 46 times more than flagged misinformation did.


Jennifer Allen, Ph.D.

Per view, misinformation appears to be more harmful, but it had far lower reach than vaccine-skeptical content, whose overall impact was attributed to two factors: persuasiveness and exposure. These findings highlight the need for further research to develop content moderation methods that protect users from harmful content while upholding freedom of speech.
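The logic above reduces to simple arithmetic: a piece of content’s predicted impact scales with its persuasive effect per view times its number of views, so weakly misleading but widely seen content can outweigh strongly misleading but rarely seen content. The numbers below are made up for illustration and are not the paper’s estimates.

```python
def predicted_impact(effect_per_view, views):
    """Total predicted impact = per-view persuasive effect x exposure."""
    return effect_per_view * views

# Hypothetical: flagged misinformation shifts intentions more per view,
# but unflagged vaccine-skeptical content is seen far more often.
flagged_misinfo   = predicted_impact(-0.005, 1_000)    # strong effect, low reach
vaccine_skeptical = predicted_impact(-0.001, 100_000)  # weak effect, high reach

print(vaccine_skeptical / flagged_misinfo)  # 20.0: reach dominates in this toy case
```

The same multiplication, applied to the real persuasion and exposure estimates in the study, is what produces the roughly 46-fold difference Allen reports.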

Jennifer Allen’s paper was published in Science last summer and has since gained extensive media attention, with coverage from outlets including The Guardian, the Los Angeles Times, and Scientific American.

At CIND, both Fasching and Allen discussed the role AI plays in public discourse and in the information we encounter every day, whether in the news or on social media. Allen’s research shows that AI can be a valuable tool for measuring the impact of news content on user behavior, while Fasching’s work demonstrates that although AI can be used for good, current models are biased and inconsistent in how they classify hate speech, which could amplify harmful content on platforms. Together, their findings show that AI not only influences what content enters the media ecosystem but can also serve as a tool for mitigating the harmful effects of bias in the media.

AUTHORS

Delphine Gardiner

Senior Communications Specialist