Research

Penn Media Accountability Project (PennMAP)

PennMAP is building technology to detect patterns of bias and misinformation in media from across the political spectrum, spanning television, radio, social media, and the broader web. We will also track consumption of information via television, desktop computers, and mobile devices, as well as its effects on individual and collective beliefs and understanding. In collaboration with our data partners, we are also building a scalable data infrastructure to ingest, process, and analyze tens of terabytes of television, radio, and web content, as well as representative panels of roughly 100,000 media consumers tracked over several years.


COVID – Philadelphia

Our team is building a collection of interactive data dashboards that visually summarize human mobility patterns over time and space for a collection of cities, starting with Philadelphia, and that highlight potentially relevant demographic correlates. We are estimating a series of statistical models to identify correlations between demographic and human mobility data (e.g., do age, race, gender, or income level predict social-distancing metrics?) and are using mobility and demographic data to train epidemiological models designed to predict the impact of policies around reopening and vaccination.
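The kind of question posed above can be framed as a regression of a mobility metric on demographic covariates. The sketch below is a hypothetical illustration on synthetic data, not the lab's actual model, variables, or results; the covariate names and effect sizes are invented for demonstration.

```python
import numpy as np

# Hypothetical illustration: regress a social-distancing metric on
# demographic covariates using synthetic tract-level data.
rng = np.random.default_rng(0)
n = 500

# Invented demographic covariates for n census tracts.
median_age = rng.normal(40, 10, n)
median_income = rng.normal(60, 15, n)     # in $1,000s (assumed units)
pct_remote_jobs = rng.uniform(0, 50, n)   # % of jobs that can be done remotely

# Synthetic outcome: % reduction in mobility, driven here mainly by
# income and remote-work share (toy effect sizes, not real estimates).
mobility_reduction = (
    5 + 0.05 * median_age + 0.2 * median_income
    + 0.4 * pct_remote_jobs + rng.normal(0, 5, n)
)

# Ordinary least squares fit with an intercept column.
X = np.column_stack([np.ones(n), median_age, median_income, pct_remote_jobs])
coefs, *_ = np.linalg.lstsq(X, mobility_reduction, rcond=None)
for name, b in zip(["intercept", "age", "income", "remote_jobs"], coefs):
    print(f"{name}: {b:.3f}")
```

With enough tracts, the estimated coefficients recover the simulated effects, which is the sense in which such a model "identifies correlations" between demographics and mobility.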


High-Throughput Experiments on Group Dynamics

To achieve replicable, generalizable, scalable, and ultimately useful social science, we believe it is necessary to rethink the fundamental “one at a time” paradigm of experimental social and behavioral science. In its place we intend to design and run “high-throughput” experiments that are radically different in scale and scope from the traditional model. This approach opens the door to new experimental insights, as well as new approaches to theory building.

Common Sense

This project tackles the definitional conundrum of common sense head-on via a massive online survey experiment. Participants are asked to rate thousands of statements, spanning a wide range of knowledge domains, in terms of both their own agreement with the statement and their belief about the agreement of others. Our team has developed novel methods to extract statements from several diverse sources, including appearances in mass media, non-fiction books, and political campaign emails, as well as statements elicited from human respondents and generated by AI systems. We have also developed new taxonomies to classify statements by domain and type.
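One plausible way to score the two ratings described above is to combine how often a respondent sides with the majority with how accurately they predict others' agreement. The sketch below is a hypothetical simplification on synthetic data, not necessarily the project's actual metric or methodology; the component names ("consensus", "awareness") and the combination rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_statements = 20

# Toy data: observed population agreement rate for each statement.
pop_rate = rng.uniform(0.1, 0.9, n_statements)

# One respondent's own ratings (0/1) and guesses about others' agreement.
own_agree = (rng.random(n_statements) < pop_rate).astype(float)
guess_rate = np.clip(pop_rate + rng.normal(0, 0.1, n_statements), 0, 1)

# "Consensus" (assumed name): fraction of statements where the
# respondent sides with the majority view.
majority = (pop_rate > 0.5).astype(float)
consensus = np.mean(own_agree == majority)

# "Awareness" (assumed name): 1 minus the mean absolute error of the
# respondent's guesses about the population agreement rate.
awareness = 1 - np.mean(np.abs(guess_rate - pop_rate))

# One possible combined score: the geometric mean of the two components.
score = np.sqrt(consensus * awareness)
print(f"consensus={consensus:.2f}, awareness={awareness:.2f}, score={score:.2f}")
```

The point of the sketch is only that both dimensions of the survey, one's own agreement and one's beliefs about others' agreement, enter any such measure.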

News

Commonsensicality: A New Platform to Measure Your Common Sense

Most of us believe that we possess common sense; however, we find it challenging to articulate which of our beliefs are commonsensical or how “common” we think they are. Now, the CSSLab invites participants to measure their own level of common sense by taking a survey on a new platform, the Common Sense project.
Since its launch, the project has received significant media attention; it was recently featured in The Independent, The Guardian, and New Scientist, attracting over 100,000 visitors to the platform just this past week.

CSSLab 2024 End-of-Summer Research Seminar Recap

On August 2nd, ten undergraduate and Master’s students showcased their research at the third annual Student Research Mini-Conference, which featured presentations from all four major research groups at the Computational Social Science Lab (CSSLab) at Penn: PennMAP, COVID-Philadelphia/Human Mobility, Group Dynamics, and Common Sense. Here are the highlights from this conference: 

CSSLab Establishes Virtual Deliberation Lab to Reduce Affective Polarization

When a Republican and a Democrat sit down to discuss gun control, how is it going to go? Conversations between Republicans and Democrats can be either productive or polarizing, and social scientists want to understand what makes conversations between people from competing social groups succeed, since positive conversations have proven to be one of the most effective ways to reduce intergroup conflict. When conversations go poorly, however, they can instead increase polarization and reinforce negative biases.