My PhD journey began with a clear vision: to unravel the interplay between social network structures and their collective outcomes. I was particularly interested in the collective intelligence arising in those structures. With several projects already underway on this topic, I felt prepared. Perhaps optimistically, or some might think naively, I chose to tackle the literature review of my dissertation (often considered the “easy part”) during the first year of my PhD.

Originally published in Mack Institute Collective Impact by Abdullah Almaatouq, November 14, 2023

However, the deeper I waded into the literature, the murkier the waters became. The sheer volume of studies was overwhelming, but quantity wasn’t the only issue. Contradictory findings didn’t just pepper the landscape; they seemed to dominate it. On one side, I read studies that sang the praises of social interactions, emphasizing their role in fostering social learning and the emergence of collective intelligence. Counterbalancing those studies was a chorus of papers cautioning against these very principles, warning of the homogenization of thought and the dilution of diverse ideas. It felt like I was navigating a labyrinth where every paper added layers of confusion. This apparent incoherence (Watts, 2017) transformed what I had initially thought to be an “easy” part of doing a PhD into a seemingly impossible challenge.

I found myself questioning: was I simply navigating the fallout of the well-known replication crisis in research? Or were the inconsistencies merely reflections of known challenges in empirical research, like small-N samples, p-hacking, HARKing, researcher degrees of freedom, and publication bias? This wasn’t an abstract, academic conundrum; it was a tangible hurdle to writing a coherent literature review. The more I reflected, the more I realized that the issue hinted at broader methodological challenges in social science research. This realization, combined with a fortunate internship with Duncan Watts, led me to pivot my research focus. Instead of studying social networks, my dissertation became an exploration of the apparent lack of cumulativeness in the social and behavioral sciences, a result of what Allen Newell termed “playing twenty questions with nature.”

Last year, my colleagues and I collected our thoughts into a target article soon to be published in Behavioral and Brain Sciences (BBS). In it, we dive deep into the lack of cumulativeness in the experimental social and behavioral sciences and argue that it stems from the problem of incommensurability: individual experiments often operate in theoretical silos, making it difficult, if not impossible, to compare findings across studies. To address this challenge, we introduce the idea of an “integrative experiment design.” In general terms, the traditional approach, which we call the “one-at-a-time approach” to experimentation, starts with a single, often very specific, theoretically informed hypothesis. The integrative approach, by contrast, starts by embracing many potentially relevant theories: all sources of measurable experimental-design variation are treated as potentially relevant, and questions about which parameters matter more or less are settled empirically. The integrative approach proceeds in three phases:

  1. Define a comprehensive, multi-dimensional design space for the phenomenon of interest. 
  2. Sample strategically from this space, guided by the research objectives. 
  3. Integrate the results to develop theories that can address the observed outcome variations.  

But what does this mean? 

The integrative approach begins by clearly defining the design space of all possible experiments in a particular domain of interest. Experiments that have already been run can be placed at specific coordinates along axes representing the degrees of freedom in the experimental design, while those not yet undertaken mark areas left to explore. The important takeaway is the method’s inherent ability to pinpoint both the differences and the similarities between any pair of experiments focused on a shared outcome. Put simply, this method ensures commensurability from the get-go.
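To make this concrete, here is a minimal sketch in Python of how a design space might be encoded and how prior experiments become coordinates within it. The dimension names, levels, and studies are hypothetical illustrations of mine, not taken from the article:

```python
from itertools import product

# Hypothetical design-space dimensions for experiments on collective
# problem solving; the dimensions and their levels are illustrative only.
DESIGN_SPACE = {
    "group_size": [1, 4, 16, 64],
    "network_structure": ["none", "fully_connected", "small_world"],
    "task_complexity": ["low", "medium", "high"],
    "incentive": ["flat", "performance_based"],
}

# Every point in the Cartesian product of the levels is one possible experiment.
all_designs = list(product(*DESIGN_SPACE.values()))
print(f"{len(all_designs)} possible experiments in this toy space")

# Two hypothetical prior studies, expressed as coordinates in the same space.
study_a = {"group_size": 4, "network_structure": "fully_connected",
           "task_complexity": "low", "incentive": "flat"}
study_b = {"group_size": 16, "network_structure": "small_world",
           "task_complexity": "low", "incentive": "flat"}

# Commensurability from the start: the dimensions on which two studies differ
# are exactly the candidate explanations for any divergence in their findings.
differs_on = [dim for dim in DESIGN_SPACE if study_a[dim] != study_b[dim]]
print("Studies differ on:", differs_on)
```

Because both studies live in the same coordinate system, a disagreement between their findings points to an enumerable set of candidate explanations (here, group size and network structure) rather than to a vague incompatibility.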

One practical issue with the integrative approach is that the size of the design space explodes as more dimensions are identified. Thankfully, several existing methods can help researchers navigate such high-dimensional spaces efficiently.
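As one illustration of such a method (my example, not a prescription from the article), a space-filling design like Latin hypercube sampling spreads a fixed budget of experiments evenly across many dimensions at once. A minimal sketch using SciPy:

```python
import numpy as np
from scipy.stats import qmc

# Suppose the design space has 6 dimensions and the budget allows 20 experiments.
# A Latin hypercube design covers all 6 dimensions with those 20 points, whereas
# a one-at-a-time grid would vary a single factor while holding the rest fixed.
sampler = qmc.LatinHypercube(d=6, seed=0)
unit_points = sampler.random(n=20)  # 20 points in the unit hypercube [0, 1)^6

# Map the first unit coordinate onto a hypothetical discrete dimension,
# e.g., candidate group sizes.
group_sizes = np.array([1, 2, 4, 8, 16, 32, 64])
sampled_sizes = group_sizes[(unit_points[:, 0] * len(group_sizes)).astype(int)]
print(sampled_sizes)
```

When experiments can instead be run sequentially, adaptive strategies (fitting a surrogate model to the results so far and sampling where uncertainty is highest) serve the same navigational purpose.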

Finally, just as with the traditional one-at-a-time method, the end goal of the integrative approach remains the formulation of solid, cohesive, and progressively built theoretical explanations. The process, however, differs notably: instead of always seeking new, distinct theories, the emphasis shifts to identifying the extent and limits of current theories, which often involves understanding complex interactions among existing constructs.

But how do we determine the dimensions of the design space? Given resource constraints, how can we best devise sampling strategies? What implications does this approach have for the nature of theory in our fields? And could it inadvertently concentrate research power among a few, potentially intensifying research disparities? For an in-depth discussion of these questions and more, I encourage you to read our target article, the accompanying commentaries, and our response to the commentaries. As our field continues to engage with these approaches, we will more fully realize their collective impact.

 

AUTHOR

Abdullah Almaatouq
Affiliated Researcher, Massachusetts Institute of Technology