Project Ratio

“Fake news,” broadly defined as false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive online, with serious consequences for democracy. The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. In particular, since the 2016 US presidential election, the deliberate spread of misinformation on social media has generated extraordinary concern, in large part because of its potential effects on public opinion, political polarization, and ultimately democratic decision making. Inspired by “solution-oriented research,” Project Ratio aims to foster a news ecosystem and culture that values and promotes authenticity and truth.

However, a proper understanding of misinformation and its effects requires a much broader view of the problem, encompassing biased and misleading (but not necessarily factually incorrect) information that is routinely produced or amplified by mainstream news organizations. Much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulation by malicious actors. Project Ratio measures the origins, nature, and prevalence of misinformation, broadly construed, as well as its impact on democracy. We strive for objective and credible information, providing a first-of-its-kind, real-time, cross-platform mapping of news content at scale as it moves through the “information funnel”: from news production, through distribution and discovery, to consumption and absorption.


Before the 2016 Election


After the 2016 Election


KEY RESEARCHERS

Duncan Watts

Stevens University Professor & twenty-third Penn Integrates Knowledge Professor

David Rothschild

Research Scientist @ Microsoft

Homa Hosseinmardi

Research Scientist

PUBLICATIONS

Rebuilding legitimacy in a post-truth age

Duncan J. Watts and David Rothschild.

The current state of public and political discourse is in disarray. Outright fake news stories circulate on social media. The result has been called a post-truth age, in which evidence, scientific understanding, or even just logical consistency have become increasingly irrelevant to political argumentation.

Don’t blame the election on fake news. Blame it on the media.

Duncan J. Watts and David Rothschild.

Since the 2016 presidential election, an increasingly familiar narrative has emerged concerning the unexpected victory of Donald Trump: fake news, amplified on social networks, swung the election. We believe that the volume of reporting around fake news, and the role of tech companies in disseminating those falsehoods, is both disproportionate to its likely influence on the outcome of the election and diverts attention from the culpability of the mainstream media itself.

The science of fake news

David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts and Jonathan L. Zittrain.

The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. We discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads.

DATA

DATA OVERVIEW

The rise of big data has made unprecedented quantities of behavioral data available, opening new ground for research on the social, economic, cultural, and ethical questions of social science. Bridging computer science and social science, Project Ratio pursues a use-inspired, data-driven approach to computational social science, yielding a diversity of perspectives on explaining, understanding, and predicting the flow and impact of information. Collaborating with data providers including Nielsen, PeakMetric, TVEyes, and Harmony Labs, we are building a large-scale data infrastructure for studying production, distribution, consumption, and absorption in the information ecosystem, illuminating each aspect of research on “fake news” in depth and in breadth.

Homa Hosseinmardi and Sam Wolken Speak at Annenberg Workshop


Homa Hosseinmardi and Sam Wolken of the Computational Social Science Lab (CSSLab) were recently invited to speak at the Political and Information Networks Workshop on April 25-26. The workshop was organized by the Center for Information Networks and Democracy (CIND), a new lab under the Annenberg School for Communication. CIND studies how communication networks in the digital era shape democratic processes, and its research areas include Information Ecosystems and Political Segregation (or Partisan Segregation).

Joe Biden’s (but not Donald Trump’s) age: A case study in the New York Times’ inconsistent narrative selection and framing


On the weekend of March 2-3, 2024, the landing page of the New York Times was dominated by coverage of their poll showing voter concern over President Biden’s age. There was a lot of concern among Democrats about the methods of the poll, especially around the low response rate and leading questions. But as a team of researchers who study both survey methods and mainstream media, we are not surprised that people are telling pollsters they are worried about Biden’s age. Why wouldn’t they? The mainstream media has been telling them to be worried about precisely this issue for months.

Hyperpartisan consumption on YouTube is shaped more by user preferences than the algorithm


Given the sheer amount of content produced every day on a platform as large as YouTube, which hosts over 14 billion videos, the need for some sort of algorithmic curation is inevitable. As YouTube has attracted millions of views on partisan videos of a conspiratorial or radical nature, observers speculate that the platform’s algorithm unintentionally radicalizes its users by recommending hyperpartisan content based on their viewing history.

But is the algorithm the primary force driving these consumption patterns, or is something else at play?

The YouTube Algorithm Isn’t Radicalizing People


About a quarter of Americans get their news on YouTube. With its billions of users and hours upon hours of content, YouTube is one of the largest online media platforms in the world.

In recent years, there has been a popular narrative in the media that videos from highly partisan, conspiracy theory-driven YouTube channels radicalize young Americans and that YouTube’s recommendation algorithm leads users down a path of increasingly radical content.

New Insights on Common Sense Take the Spotlight on Canadian Radio


Mark E. Whiting was featured on Quirks and Quarks, a science and technology podcast on CBC (Canadian Broadcasting Corporation) radio. The host, Bob McDonald, is a renowned Canadian science journalist who interviewed Whiting on his recent milestone. Their conversation, “Common sense is not that common, but it is widely distributed,” was aired on January 19, 2024.

The commonalities of common sense


Throughout human history, survival and the formation of complex societies have heavily depended on knowledge. Equally crucial are the assumptions about what others perceive as true or false, namely common sense. This is evident in everyday situations like adhering to road rules: Pedestrians naturally avoid walking into traffic, while drivers refrain from driving on sidewalks to bypass congestion.

Commonsensicality: A Novel Approach to Thinking About Common Sense and Measuring It

In general, we believe that we possess common sense to some extent, but have you ever wondered whether what you perceive to be common sense is also considered common sense by others?
In other words, is common sense actually common?

The answer remains elusive in large part due to a lack of empirical evidence. To address this problem, CSSLab Senior Computational Social Scientist Mark E. Whiting and CSSLab Founder and Director Duncan J. Watts introduce an analytical framework for quantifying common sense in their paper titled: “A framework for quantifying individual and collective common sense.”

Warped Front Pages


Seven years ago, in the wake of the 2016 presidential election, media analysts rushed to explain Donald Trump’s victory. Misinformation was to blame, the theory went, fueled by Russian agents and carried on social networks. But as researchers, we wondered if fascination and fear over “fake news” had led people to underestimate the influence of traditional journalism outlets. After all, mainstream news organizations remain an important part of the media ecosystem—they’re widely read and watched; they help set the agenda, including on social networks.

Mapping the Murky Waters: The Promise of Integrative Experiment Design

My PhD journey began with a clear vision: to unravel the interplay between social network structures and their collective outcomes. I was particularly interested in the collective intelligence arising in those structures. With several projects already underway on this topic, I felt prepared. Perhaps optimistically, or some might think naively, I chose to tackle the literature review of my dissertation (often considered the “easy part”) during the first year of my PhD.