Most of us believe we possess at least some common sense, but have you ever wondered whether what you perceive to be common sense is also common sense to others? In other words, is common sense actually common?

The answer remains elusive, in large part due to a lack of empirical evidence. To address this gap, CSSLab Senior Computational Social Scientist Mark E. Whiting and CSSLab Founder and Director Duncan J. Watts introduce an analytical framework for quantifying common sense in their paper, “A framework for quantifying individual and collective common sense.”

To quantify common sense at the individual and collective levels, Whiting and Watts (W&W) analyzed a dataset of ratings from 2,046 participants with a diverse range of demographic and socioeconomic backgrounds who evaluated 4,407 claims. They introduced two measures: commonsensicality, the degree to which an individual person or claim aligns with common sense, and pq common sense, a measure of how large the cliques of shared beliefs are within a given population and corpus of claims. At the individual level, a commonsensicality score of 1 would indicate that raters agree on most claims and correctly expect others to agree with them as well, whereas scores closer to zero arise in a setting where common sense is not so common.

To measure individual commonsensicality, each rater was randomly assigned 50 claims (both human- and AI-generated) and answered two questions about each: 1) whether they agreed or disagreed with the claim, and 2) whether they thought the majority of other participants would agree or disagree with it. Based on the responses, W&W found that commonsensicality depends more on the nature of a statement than on who is rating it, and is associated with straightforward, factual statements about physical reality. Surprisingly, there was little variation in what people considered common sense across demographic groups, such as race and political leaning.
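To make the idea concrete, here is a minimal sketch in Python of how such a score could be computed from the two answers. The geometric-mean combination and the toy data below are assumptions of this illustration, not the exact formula published in the paper.

```python
import numpy as np

def individual_commonsensicality(agree, predict_majority):
    """Illustrative per-rater commonsensicality scores.

    agree:            (n_raters, n_claims) bools, True if the rater agrees with the claim
    predict_majority: (n_raters, n_claims) bools, True if the rater thinks most others agree
    Returns one score per rater, each in [0, 1].
    """
    # Majority belief for each claim, pooled over all raters
    majority = agree.mean(axis=0) >= 0.5

    # How often a rater's own belief matches the majority belief
    consensus = (agree == majority).mean(axis=1)

    # How often a rater correctly anticipates the majority belief
    awareness = (predict_majority == majority).mean(axis=1)

    # Combine with a geometric mean; this weighting is an assumption of the
    # sketch, not necessarily the paper's published definition
    return np.sqrt(consensus * awareness)

# Toy example: 3 raters, 4 claims
agree = np.array([[1, 1, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 1]], dtype=bool)
predict = np.array([[1, 1, 0, 1],
                    [1, 1, 0, 1],
                    [0, 0, 1, 1]], dtype=bool)
print(individual_commonsensicality(agree, predict))
```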

Insights from a Belief Graph

W&W compiled the participants’ responses into a belief graph, which represents the relationships between raters and claims. Within this graph, responses can be organized into bicliques: subgroups of people and claims in which everyone agrees on every claim. Since it was not feasible for each rater to evaluate all 4,407 claims, a model was trained to predict the remaining responses, both the raters’ own beliefs and their expectations about others’ beliefs.
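The structure is easy to picture as a small graph. The sketch below, using the networkx library, builds a toy belief graph; the people, claims, and agreement edges are made up for illustration and are not data from the study.

```python
import networkx as nx

# Toy belief graph: people and claims form two node sets, and an edge
# means "this person agrees with this claim".
people = ["p1", "p2", "p3"]
claims = ["c1", "c2", "c3", "c4"]

B = nx.Graph()
B.add_nodes_from(people, kind="person")
B.add_nodes_from(claims, kind="claim")

# Agreement edges; in the study these would come from observed ratings
# plus the model's predictions for claims a rater never saw.
B.add_edges_from([
    ("p1", "c1"), ("p1", "c2"), ("p1", "c4"),
    ("p2", "c1"), ("p2", "c2"),
    ("p3", "c2"), ("p3", "c3"), ("p3", "c4"),
])

# A biclique is a group of people and claims in which every person is
# connected to every claim, e.g. ({p1, p2}, {c1, c2}) in this toy graph.
```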

The two survey questions yielded four possible combinations of outcomes: “I agree and think most others will agree,” “I agree and think most others won’t agree,” “I don’t agree and think most others will agree,” and “I don’t agree and think most others won’t agree.” Additional factors, such as the raters’ demographics and characteristics of the claims, including their category and epistemological properties (social vs. physical reality, opinions vs. facts), helped the model predict these outcomes.
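As a rough illustration of this kind of prediction task (and not the authors’ actual model), one could train a simple classifier that maps rater demographics and claim properties to one of the four outcomes. Every column name, category, and label in the sketch below is hypothetical.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy training data: one row per observed (rater, claim) pair.
# Features describe the rater and the claim; the label is one of the four
# agree/expect combinations described above.
rows = pd.DataFrame({
    "age_group":     ["18-29", "30-44", "45-59", "18-29", "60+", "30-44"],
    "politics":      ["lib", "con", "mod", "mod", "con", "lib"],
    "claim_type":    ["physical", "social", "physical", "social", "physical", "social"],
    "claim_is_fact": [True, False, True, False, True, False],
    "outcome":       ["agree_expect_agree", "agree_expect_disagree", "agree_expect_agree",
                      "disagree_expect_agree", "agree_expect_agree", "disagree_expect_disagree"],
})

features = ["age_group", "politics", "claim_type", "claim_is_fact"]
model = Pipeline([
    ("encode", ColumnTransformer([("onehot", OneHotEncoder(handle_unknown="ignore"), features)])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(rows[features], rows["outcome"])

# Predict the outcome for an unrated (rater, claim) pair
new_pair = pd.DataFrame([{"age_group": "45-59", "politics": "lib",
                          "claim_type": "physical", "claim_is_fact": True}])
print(model.predict(new_pair))
```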

The Bron-Kerbosch clique-finding algorithm was then applied to the completed, predicted graph to identify cliques containing at least one belief, and those cliques were expanded by adding connected beliefs to build more extensive belief profiles, yielding the larger cliques used to measure collective common sense. The analysis of pq common sense, which is based on the sizes of these cliques, shows that common sense is not so common: most cliques include few people or few claims, and what one considers common sense is largely shaped by the individual.
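One standard way to run a Bron-Kerbosch style clique finder on a bipartite graph like this is to make each side internally complete, so that maximal cliques of the augmented graph correspond to maximal bicliques of the original. The sketch below rebuilds the toy graph from earlier so it runs on its own, enumerates the bicliques, and reports each one’s size as a fraction p of people and q of claims; the paper’s actual pq common sense calculation is more involved than this illustration.

```python
import itertools
import networkx as nx

# Same toy belief graph as the earlier sketch, rebuilt here so this
# snippet runs on its own: an edge means "this person agrees with this claim".
people = ["p1", "p2", "p3"]
claims = ["c1", "c2", "c3", "c4"]
B = nx.Graph([
    ("p1", "c1"), ("p1", "c2"), ("p1", "c4"),
    ("p2", "c1"), ("p2", "c2"),
    ("p3", "c2"), ("p3", "c3"), ("p3", "c4"),
])

# Make each side internally complete, so every maximal clique of the
# augmented graph corresponds to a maximal biclique (people x claims) of B.
# networkx's find_cliques implements a Bron-Kerbosch style enumeration.
G = B.copy()
G.add_edges_from(itertools.combinations(people, 2))
G.add_edges_from(itertools.combinations(claims, 2))

person_set, claim_set = set(people), set(claims)
for clique in nx.find_cliques(G):
    P = sorted(n for n in clique if n in person_set)
    Q = sorted(n for n in clique if n in claim_set)
    if P and Q:  # keep only cliques that contain at least one belief
        p, q = len(P) / len(people), len(Q) / len(claims)
        print(f"people={P} claims={Q} p={p:.2f} q={q:.2f}")
```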

Implications for AI Integration

Notably, the authors’ approach to quantifying common sense is valuable for AI, providing a foundation for building advanced models that can generate common sense knowledge from a combination of existing knowledge and human reasoning. But to better integrate common sense knowledge into AI systems, we first need a clearer understanding of what common sense means to humans.

Common sense comprises two main elements: knowledge and common sense reasoning, the way people use that knowledge to make context-dependent decisions. Because any one person can possess only so much common sense knowledge, it is essential that AI algorithms be able to generate content that aligns with common sense and make decisions in unpredictable, open environments. Investing more in training AI systems to improve their decision-making and natural language processing will lead to more robust systems that can more accurately simulate commonsensicality.

Though the intersection of common sense and AI is a more complex topic, understanding common sense quantitatively is a good starting point for yielding novel findings and addressing the historical, philosophical, and social significance of common sense knowledge.

Learn more about our work on common sense at: commonsensicality.org

Read the full paper here

AUTHORS

Delphine Gardiner

Communications Specialist