Crowdsourcing – How the “wisdom of the crowd” breaks new ground for academic research

Source: US National Library of Medicine

by Stefanie Baumann
Crowdsourcing and its contributions to innovation in firms and public organizations have been researched from as early as 2011 (Aitamurto, Leiponen and Tee, 2011).

The term itself is defined as the use of collective intelligence (Lévy, 1997). What sounds like a surprisingly social idea is in fact a statistical truth: human judgment delivers its most accurate results when it is aggregated across a diverse group that is representative of the whole population. In other words, the “human” factor that normally induces failure in our increasingly technological world can be minimized by a distributed group of humans – or, in the more popular term, by the “wisdom of the crowd” (Surowiecki, 2004). The aim of this article is to examine the benefits that a crowdsourced approach can bring to academic research.
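The statistical intuition behind the “wisdom of the crowd” can be sketched with a toy simulation (all numbers here are illustrative, not taken from the article). Assuming each participant’s estimate of some true quantity is noisy but the errors are independent, the error of the crowd’s average shrinks far below the error of a typical individual:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # hypothetical quantity being estimated (e.g. beans in a jar)
N_PEOPLE = 1000

# Each person gives a noisy estimate; errors are independent across people.
estimates = [TRUE_VALUE + random.gauss(0, 25) for _ in range(N_PEOPLE)]

# Typical error of a single person vs. error of the aggregated crowd estimate.
avg_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
crowd_error = abs(statistics.mean(estimates) - TRUE_VALUE)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"error of the crowd mean:  {crowd_error:.1f}")
```

With independent errors, averaging n estimates reduces the standard error by a factor of roughly √n, which is why the crowd mean lands much closer to the truth than almost any individual guess. The key assumption is independence: if the errors are correlated (everyone shares the same bias), aggregation does not help.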

In a broad sense, we can identify three ways in which crowdsourcing can be applied in academia. First, and most prominently, research projects that lack official funding can be co-funded by a community. This can relieve researchers’ monetary shortages and, while it enables academic research, it does not directly influence how the research is conducted. Secondly, experts can engage in a community of their own and crowdsource their research. Access to qualified individuals on a large scale offers the chance to generate better results in a shorter period of time. Since researchers are commonly organized in interdisciplinary groups whose members come from different faculties all over the world, the coordination of such projects may become both less expensive and more efficient.

A central idea behind crowdsourced research is presented in the third argument. Scott E. Page, a professor at the University of Michigan, argues that diversity is a key element of superior performance and that it enhances accuracy (The New York Times, 2008). Where crowdsourcing finds its raison d’être in the argument of representativeness, the same applies to diversity as a principle of quantitative research. Hence, a crowdsourced approach to quantitative research can provide a high level of representativeness. Of course, this argument holds true only when the “crowd” or community reflects the characteristics of the group to be assessed.

As of 2012, Google Scholar listed some 3,000 academic papers that made use of this relatively new approach, which is especially attractive for psychology, behavioural economics, and social and political science (The Economist, 2012). Crowdsourcing enables less biased psychological investigation across national boundaries and, most importantly, it allows for a wide variety of demographic characteristics among participants, e.g. ethnicity, social class, educational background and political system.

The rise of crowdsourcing platforms such as San Francisco-based CrowdFlower or Amazon-owned Mechanical Turk was driven primarily by computer scientists, product developers and market researchers rather than by the academic community. This, however, has slowly been changing. While Amazon’s Mechanical Turk recruits much of its workforce from low-labour-cost countries such as India, crowdsourcing platforms also exist in Europe, for example the Berlin-based startup WorkHub, which offers so-called “cloudworking” solutions.

Instead of relying on cheap labour, users can be attracted both by monetary incentives and by interaction that triggers gaming behaviour and ultimately increases their intrinsic motivation. The relevance of intrinsically motivated test persons for academic research is still being explored, but the overall benefit remains: researchers can organize funding for less subsidized fields of study or exchange knowledge more deliberately with colleagues all over the world. Crowdsourced research can deliver quantitative accuracy through a variety of participants – ultimately a gain in representative evidence for academia.

About the Author
Stefanie Baumann is a graduate student at the Berlin School of Economics and Law. Her main field of study is political economy and the critical factor of innovation. She is currently working for WorkHub, a Berlin-based startup and platform for cloudworking solutions.

  • Aitamurto, T., Leiponen, A. and Tee, R. (2011). The Promise of Idea Crowdsourcing – Benefits, Contexts, Limitations. White Paper, June 2011. Available online.
  • Cavender-Bares, K. (2011). Sentiment Analysis Milestone: More Than One Million Human Judgments. July 10. Available online.
  • Dreifus, C. (2008). In Professor’s Model, Diversity = Productivity. The New York Times, January 8. Available online.
  • Howe, J. (2006). The Rise of Crowdsourcing. Wired Magazine Online, June 14. Available online.
  • Lévy, P. (1997). Education and Training: New Technologies and Collective Intelligence. Prospects, Vol. 27 (2), pp. 248–263.
  • Mirus, J. 10 Gründe für den Einsatz von Crowdsourcing in der Marktforschung [10 reasons for using crowdsourcing in market research]. Published by “result – Institut für digitalen Wandel”, August 20. Available online.
  • Surowiecki, J. (2004). The Wisdom of Crowds. US: Anchor Books.
  • The Economist (2012). The Roar of the Crowd: Crowdsourcing is transforming the science of psychology. In: Experimental Psychology. The Economist, May 26. Available online.

2 thoughts on “Crowdsourcing – How the ‘wisdom of the crowd’ breaks new ground for academic research”

  1. Great article, thank you very much, especially for your well-researched examples — I will take this as an opportunity to motivate students to pick this topic for a bachelor thesis!

    On the anecdotal level, I remember well how, when the web first hit the headlines in the mid-1990s, predictions quickly appeared that collective intelligence would overtake individual intelligence in no time. I think a few A.I. people saw an opportunity to reanimate a science that had slowed nearly to a standstill (and robots still don’t amount to much when compared to humans). Conceptually, it is probably not possible to discuss collective intelligence without referring to linked concepts like collective consciousness (a term introduced by the French sociologist Émile Durkheim in 1893) as well as C. G. Jung’s collective unconscious, a term from analytical psychology. Both of these concepts are, and have been, used as synonyms for the “wisdom of the crowd”.

    Your first two use cases for crowdsourcing in research are very practical and not hard to understand: like many other internet-related activities, they make use of scalability.

    The third ‘argument’, as you call it, in favour of crowdsourced research seems a lot trickier to me! It’s tempting to transfer the positive values that we have come to associate with diversity, but “diversity as a principle of quantitative research”, as you put it, is obviously a different beast. I think you give us a nicely balanced view of the issue in your short article. On the one hand, a diverse data set sounds ideal for quantitative research. On the other hand, it is less the diversity or inhomogeneity of the data set that counts than the question of whether it is a well-understood and well-controlled data set. To verify that condition, studies will obviously be required. I’m very interested to see what comes out!
