Computational Social Science Lab at Penn

CSS unites computer science, statistics, and social science to solve challenging real-world problems using digital data and platforms. Through mass collaboration with industry, government, and civil society, our research generates insights that advance basic science in the service of practical applications.

News

Duncan Watts and CSSLab’s New Media Bias Detector

The 2024 U.S. presidential debates kicked off June 27, with President Joe Biden and former President Donald Trump sharing the stage for the first time in four years. Duncan Watts, a computational social scientist at the University of Pennsylvania, considers this an ideal moment to test a tool his lab has been developing over the past six months: the Media Bias Detector.

“The debates offer a real-time, high-stakes environment to observe and analyze how media outlets present and potentially skew the same event,” says Watts, a Penn Integrates Knowledge Professor with appointments in the Annenberg School for Communication, School of Engineering and Applied Science, and Wharton School. “We wanted to equip regular people with a powerful, useful resource to better understand how major events, like this election, are being reported on.”

What Public Discourse Gets Wrong About Misinformation Online

Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Stevens University Professor Duncan Watts, study Americans’ news consumption. In a new article in Nature, Watts, along with David Rothschild of Microsoft Research (Wharton Ph.D. ‘11 and PI in the CSSLab), Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College, and Annenberg alumna Emily Thorson (Ph.D. ’13) of Syracuse University, reviews years of behavioral science research on exposure to false and radical content online. They find that exposure to harmful and false information on social media is minimal for all but the most extreme users, despite a media narrative that claims the opposite.

Mapping Media Bias: How AI Powers the Computational Social Science Lab’s Media Bias Detector

Every day, American news outlets collectively publish thousands of articles. In 2016, according to The Atlantic, The Washington Post published 500 pieces of content per day; The New York Times and The Wall Street Journal each published more than 200. “We’re all consumers of the media,” says Duncan Watts, Stevens University Professor in Computer and Information Science. “We’re all influenced by what we consume there, and by what we do not consume there.”

Over the past 100 years, social science has generated a tremendous number of theories on the topics of individual and collective human behaviour. However, it has been much less successful at reconciling the innumerable inconsistencies and contradictions among these competing explanations.

Duncan Watts
CSSLab Founder


Researchers & Staff

Meet the CSSLab