PHILADELPHIA, January 11, 2022 —

Does explicitly acknowledging bias make us less likely to make biased decisions? A new study examining how people justify decisions based on biased data finds that this is not necessarily the case.

Narrative-based explanations of what makes companies successful are ubiquitous in both the academic and popular press. However, these “success stories” are known to be fraught with bias. For instance, disproportionate attention is paid to wildly successful “unicorn” companies, while the far greater number of unsuccessful companies goes largely unaccounted for. Attempts to decipher what makes companies successful from these examples are similarly skewed: explanations often highlight certain traits shared by successful companies while ignoring others, or overlook the unsuccessful companies that exhibit the same “successful” traits. As a result, almost any feature of interest can appear to be associated with success as long as at least some examples of such an association exist.

But do these biases distort the beliefs of readers in important ways? Or are they simply entertaining stories that aren’t meant to be interpreted as recipes for success and so don’t cause any harm?

In a new paper, “Success stories cause false beliefs about success,” published in Judgment and Decision Making, CSSLab director Duncan Watts and co-authors George Lifchits, Ashton Anderson, Daniel Goldstein, and Jake Hofman show that these narratives do in fact cause readers to make incorrect inferences about reality, even after they acknowledge the biases, and that the effects are large enough to matter.

Testing biased data’s persuasive potential

Using a large-scale experiment, Lifchits et al. examined the ways in which widely read—but clearly partial—success narratives affect the choices that people make, how confident they are in those choices, and the justifications they provide for them.

Participants were tasked with predicting whether a startup founded by a college graduate or one founded by a college dropout is more likely to become a billion-dollar “unicorn” company. Before making their decision, each participant was presented with either a set of successful college graduates, a set of successful college dropouts, or no data, and was required to verify that they understood the bias underlying the examples shown. They were then asked to bet on either an unnamed graduate founder or an unnamed dropout founder, to indicate how confident they were in their decision, and, optionally, to provide a justification for their bet.

Even though participants acknowledged the bias in the data they were shown, Lifchits et al. found that simply showing biased examples of success substantially shifted their beliefs relative to showing no examples at all. Participants who saw examples of graduate founders bet on an unnamed graduate founder 87% of the time, compared with only 32% of participants who were shown examples of dropout founders and 47% of participants shown no data.
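
To put these shifts in perspective, the percentages above can be restated relative to the no-data baseline. The short Python sketch below does just that; it uses only the figures reported above, and the dictionary and variable names are illustrative rather than taken from the authors’ materials.

    # Reported betting rates: percent of participants who bet on the
    # graduate founder, by the examples they were shown beforehand.
    bet_on_graduate = {
        "saw graduate examples": 87,
        "saw dropout examples": 32,
        "saw no examples": 47,
    }

    baseline = bet_on_graduate["saw no examples"]
    for condition, rate in bet_on_graduate.items():
        shift = rate - baseline
        print(f"{condition}: {rate}% bet on the graduate ({shift:+d} points vs. no examples)")

Read this way, seeing graduate examples moved participants 40 percentage points above the no-data baseline, while seeing dropout examples moved them 15 points below it, a 55-point swing between the two example conditions.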

While these numbers alone suggest that biased narratives can sway individual decisions, they do not necessarily demonstrate a power to shift beliefs in a meaningful way. If participants were generally unsure which founder to choose and were only mildly influenced by the examples they saw, one would expect them to report low confidence in their decisions.

Interestingly, however, the authors observed the opposite: whether they saw examples of graduate founders, examples of dropout founders, or no data at all, the overwhelming majority of participants expressed substantial confidence in their decisions. What’s more, 92% of participants provided substantive justifications for their bets, indicating a tendency to spontaneously generate causal explanations, such as graduate founders being more motivated or dropout founders being more creative, to rationalize their decisions even in the absence of supporting evidence.

A threat beyond fake news

Lifchits et al.’s research has worrying implications for the information ecosystem surrounding topics such as politics, science, and health, where technically accurate but misleadingly presented data can be widely persuasive. Their work shows that it is possible to lead people to unwarranted conclusions using manipulations that would easily pass a conventional fact check, and that awareness of bias alone is not enough to offset these effects.

Beyond building on the literature on decision-making and bias, this research dovetails with the CSSLab’s Penn Media Accountability Project (PennMAP), which aims to detect patterns of bias and misinformation in media from across the political spectrum. Lifchits et al. underscore the need to broaden the study of misinformation to encompass content that is factually correct but significantly biased, a mission PennMAP is pursuing with large-scale, cross-platform media data. Such timely research will help paint a more comprehensive picture of how biased narratives shape individual and collective beliefs, the consequences of irresponsible information distribution, and the importance of media accountability.

Read the full paper published in Judgment and Decision Making here.

AUTHORS

EMMA ARSEKIN

Communications Specialist