Homa Hosseinmardi and Sam Wolken of the Computational Social Science Lab (CSSLab) were recently invited to speak at the Political and Information Networks Workshop on April 25-26. The workshop was organized by the Center for Information Networks and Democracy (CIND), a new lab housed in the Annenberg School for Communication. CIND studies the role communication networks play in democratic processes in the digital era, and its research areas include Information Ecosystems and Political (or Partisan) Segregation.

Homa’s “Epistemologies and Data Limits” round table talk 

Homa is a senior research associate at the CSSLab and an incoming faculty member in the UCLA Department of Communication. Her research centers on online safety and responsible AI, with a collaborative focus on analyzing societal challenges that emerge from socio-technical systems.

At the “Epistemologies and Data Limits” round table, Homa offered insights on the importance of high-quality, data-driven studies and on the constraints of data in the study of socio-technical systems.

Homa Hosseinmardi, Ph.D.

Since the core of her research starts with learning from data, she noted, “The availability of large, representative datasets has made it increasingly difficult to ignore the critical role observational data plays in the study of the impact of sociotechnical systems on our society, especially when lab experiments are not feasible due to ethical or practical concerns.”

At the same time, Homa discussed the importance of the data’s relevance to the problem under study when assessing socio-technical systems. She added, “We can easily fall into the trap of wrong takeaways when relying on data to tell us the story, and should constantly look for issues such as missingness, biases, and representativeness.”

She also pointed out the complexity of drawing conclusions from data: what we observe online is the result of many factors, such as user interests, social events, and algorithms. Even when we observe a behavior emerge, it is challenging to conclude how each factor individually influenced the final outcome.

Sam’s talk on “Research Priorities in the Era of AI” 

Sam Wolken is a fourth-year Ph.D. student in Communication and Political Science; his research focuses primarily on news production, including which topics publishers (news outlets) choose to cover, how they frame their coverage, and how audiences consume news content.

At the “Research Priorities in the Era of AI” round table, Sam discussed how Large Language Models (LLMs) have dramatically reduced the resources needed for some tasks in computational social science, such as data annotation. This helps reduce inequities among researchers and, in many cases, allows for more detailed measurement in scientific research, leading to richer descriptions of social phenomena and new opportunities to test theories and predictions.

Sam Wolken

However, Sam emphasized that “this is a pivotal time to consider responsible use of AI in social science research.” With tools such as LLMs rapidly becoming embedded in research workflows, there is a need to establish norms and standards that govern how we evaluate research that relies on AI technology.

He also addressed some potential downsides of AI in social science workflows that should not be ignored, such as “increasing the distance between researchers and the data they analyze and introducing subtle bias that may elude researchers without rigorous validation.”

Both Sam and Homa shared valuable insights into AI, encouraging meaningful discussion among the other invited scholars. Their contributions to the workshop highlight the need for research integrity and the responsible use of AI in the computational social sciences.

With many ongoing CSSLab projects involving large-scale datasets, LLMs have greatly increased the efficiency of work that would otherwise take much longer to complete with human annotation alone. This has made conducting research in the lab more accessible, but it also calls for vigilance against uncritical reliance on AI.

AUTHORS

Delphine Gardiner

Senior Communications Specialist