Interdisciplinary social science and the limits of quantitative research

Social scientists are increasingly using quantitative interdisciplinary research methods in the hope of obtaining more nuanced, concrete findings. However, Avery Reyna argues that without proper foresight, relying on these approaches to describe interactions between people, countries, and more complex sociopolitical systems may be harmful to the field overall

Better understanding, but at what cost?

The last couple of decades have seen growing debate about the nature and purpose of quantitative analysis in the social sciences. From machine learning models to natural language processing, these novel methods are driving major advances. Yet, when conducted alongside work from other fields, such as mathematics or computer science, these new methods may constrain the approaches that have, until now, been vital to the social sciences. Qualitative analysis, so central to social science research, may begin to merge with more abstract methods. So where does that leave us?

A quantitative shift

There has been a noticeable shift in social science research towards quantitative means of analysis. Scientists may now be collecting more original data, rather than relying solely on government data. Or they may be making use of statistical programming languages such as Python and R. Either way, this broadens the potential scope of analysis across all disciplines.

There has been a noticeable shift in social science research towards quantitative means of analysis, allowing us to measure a broader range of variables

Researchers can now study subjects including war, conflict, and international affairs by measuring abstract variables, such as power and reciprocity, which have no direct counterpart in the real world. This could transform our understanding of the field as a whole.
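To give a flavour of the workflow this shift makes routine, below is a minimal sketch in Python, using pandas and entirely invented indicator names and values, of how an abstract variable such as 'power' might be operationalised as a composite index. It is a toy under those stated assumptions, not a reference to any established measure.

```python
# A hypothetical sketch: operationalising 'power' as a composite index.
# All indicators and figures below are invented for illustration.
import pandas as pd

countries = pd.DataFrame({
    "country":      ["A", "B", "C", "D"],
    "military_exp": [620.0, 45.0, 210.0, 12.0],         # billions USD (invented)
    "population":   [331.0, 67.0, 1402.0, 10.0],        # millions (invented)
    "gdp":          [21000.0, 2800.0, 14700.0, 500.0],  # billions USD (invented)
}).set_index("country")

# Standardise each indicator, then average the z-scores into one score.
z_scores = (countries - countries.mean()) / countries.std()
countries["power_index"] = z_scores.mean(axis=1)

print(countries["power_index"].sort_values(ascending=False))
```

The same few lines would be just as easy to write in R; the point is only that a contested, unobservable concept ends up as a single column of numbers.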

A limitation in novel methods

Novel quantitative methods offer precision and clarity. Yet their power in research has its limits.

The research quality of computational methods is not guaranteed, even though a general consensus gives them an unofficial stamp of quality. Training machine learning models, programming intricate statistical procedures, and cleaning the datasets you have collected takes months of hard work. And that effort opens up a bigger conversation: is the perceived pay-off of better, faster, and more novel findings scrutinised closely enough to confirm that these results really are better, or more reliable, than their qualitative counterparts?

Numerous examples exist of studies that distort results to support a narrative stemming from the researcher’s own goals, or from the agendas of others with similar attitudes. Results can also be exploited for ‘political’ use. For example, former UK members of parliament used isolated pieces of quantitative evidence (one graph from an article) to make blanket claims about educational discrepancies between rich and poor children.

We, as researchers, must be aware of how computational thinking can misrepresent sociopolitical phenomena

True, these examples sometimes stem from a misunderstanding of quantitative analysis. Equally, they show that we, as researchers, must be aware of how computational thinking can misrepresent sociopolitical phenomena. Given the methods’ perceived robustness, such misrepresentation has real-world impact.

Enabled and constrained

The social sciences interact with computational methods in different ways: those methods enable some discoveries and constrain others. This double-edged sword is best understood by comparing the findings of qualitative research with those of its number-based counterpart.

In studies that analyse survey responses or open-ended interview questions, nuance is captured by working systematically through each piece of text and drawing one’s own conclusions; in-depth qualitative research brings a more intimate level of care. Natural language processing and other methods of computational linguistics, on the other hand, can comb through the same data far more easily and produce outputs of various types.
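To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and a handful of invented open-ended responses, of how a simple bag-of-words model combs through text at scale: it surfaces frequent terms in seconds, but flattens the context that a close qualitative reading would preserve.

```python
# A hypothetical sketch: counting terms across open-ended survey responses.
# The responses below are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

responses = [
    "I stopped voting because I no longer trust local officials.",
    "Voting feels pointless when both parties ignore my neighbourhood.",
    "I vote every time; it is the only voice I have left.",
]

# Bag-of-words model: each response becomes a vector of word counts.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(responses)

# Total frequency of each term across all responses, most frequent first.
totals = np.asarray(counts.sum(axis=0)).ravel()
for term, freq in sorted(zip(vectorizer.get_feature_names_out(), totals),
                         key=lambda pair: -pair[1]):
    print(f"{term}: {freq}")
```

The word counts arrive instantly, but whether ‘trust’ here signals resignation or anger is exactly the kind of judgement the close reading described above exists to make.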

Quantitative methods enable researchers to process and make sense of data at scale. They also, however, constrain the depth and nuance with which the large datasets produced by experiments can be understood. This trade-off is not foreign to the field; indeed, it has been the subject of debate for many years. But it forces us to look inwards at the goals of the social sciences. Do the methods we employ bring us closer to describing the world around us?

Impact beyond the sphere of academia

The importance of these choices lies in the real-world application of such methods, and in how far they inform policymakers' decisions. The goal of quantitative social science is to create digestible models that accurately describe the events unfolding around us. To do so, quantitative methods break the complex social realities we experience down into variables that are dynamic in nature.

However, this mindset can adversely affect the ways we discuss nuanced topics, from voting behaviour to intergenerational poverty. For instance, there has been meaningful work that quantitatively describes the phenomenon of democratic backsliding in contemporary American politics. Yet qualitative cultural theorists have questioned the theoretical underpinnings of these models for decades. Meanwhile, the quantitative approach assumes these theoretical positions to be valid.

We must first identify the problem we are trying to solve, before identifying the best methods to provide an answer

At what point does imposing computational modelling on a conversation traditionally dominated by qualitative approaches benefit the social sciences in the long term? By no means am I insisting that quantitative researchers must produce stellar, novel findings in every iteration of their research process in order to be taken seriously. The crux of my argument is caution. The messiness of our human interactions and sociopolitical systems cannot simply be poured into datasets to be programmed, scaled, and engineered. We must first identify the problem we are trying to solve, before identifying the best methods to provide an answer.

Researchers should be empowered to ask significant questions without first having to settle on the means of answering them. Only then will the field develop more complex, creative, and informed theories about our social, political, and economic systems.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Author

Avery Reyna
Undergraduate Researcher, University of Central Florida

Avery is an undergraduate student studying political science, sociology, and cultural anthropology.

His research spans disciplines, focusing primarily on computational social science, civil-military relations, and digital public infrastructure.

His work has been published in Political Violence at a Glance, Council on Foreign Relations, New America, and Just Security.

He tweets @avryryn
