by Stefanie Baumann
Crowdsourcing and its contributions to innovation in firms and public organizations have been researched since at least 2011 (Aitamurto, Leiponen and Tee, 2011).
The term itself has been defined as the use of collective intelligence (Lévy, 1997). What may sound like a surprisingly social idea is also a statistical truth: human knowledge delivers the most accurate results when it is aggregated across a diverse group that is representative of the whole population. In other words, the “human” factor that normally induces failure in our increasingly technological world can be minimized by a distributed group of humans – or, in the popular phrase, by the “wisdom of the crowd” (Surowiecki, 2004). The aim of this article is to examine the benefits that a crowdsourced approach can bring to academic research.
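The statistical intuition behind the “wisdom of the crowd” can be illustrated with a minimal simulation; the true value, noise level, and crowd size below are arbitrary assumptions for illustration only, not data from any study. Averaging many independent, unbiased estimates produces a far smaller error than a typical individual estimate:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # hypothetical quantity the crowd is asked to estimate
N_PEOPLE = 1000     # assumed crowd size

# Each person gives an independent, unbiased but noisy estimate
# (Gaussian noise with a standard deviation of 20).
estimates = [random.gauss(TRUE_VALUE, 20) for _ in range(N_PEOPLE)]

# Error of the aggregated (averaged) crowd estimate.
crowd_error = abs(statistics.mean(estimates) - TRUE_VALUE)

# Typical error of a single individual estimate.
avg_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)

print(f"average individual error: {avg_individual_error:.2f}")
print(f"crowd (mean) error:       {crowd_error:.2f}")
```

The crowd’s averaged error shrinks roughly with the square root of the group size, which is why aggregation outperforms individuals, provided the individual errors are independent and not systematically biased.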
In a broad sense, we can distinguish three ways in which crowdsourcing can be applied in academia. First and most prominent, research projects that lack official funding can be co-funded by a community. This can ease researchers’ monetary shortages, and although it enables academic research, it does not directly influence the way the research is conducted. Secondly, experts can engage in a community of their own and crowdsource their research. Access to qualified individuals on a large scale offers the chance to generate better results in a shorter period of time. Since researchers are commonly organized in interdisciplinary groups with members coming from different faculties all over the world, the coordination of such projects may become considerably less expensive and more efficient.
A very central idea behind crowdsourced research is presented in the third argument. Scott E. Page, a professor at the University of Michigan, argues that diversity is a central element of superior performance and that it enhances accuracy (Dreifus, 2008). Just as crowdsourcing finds its raison d’être in the argument of representativeness, diversity is a guiding principle of quantitative research. Hence, a crowdsourced approach to quantitative research can provide a high level of representativeness. Naturally, this argument holds true only when the “crowd”, or community, reflects the characteristics of the group being assessed.
As of 2012, Google Scholar recognized 3,000 academic papers that made use of this relatively new approach, which is particularly attractive to the fields of psychology, behavioural economics, and social and political science (The Economist, 2012). Crowdsourcing enables less biased psychological investigation while overcoming national boundaries, but most importantly it allows for a variety of demographic characteristics among participants, such as ethnicity, social class, educational background and political system.
The rise of crowdsourcing platforms such as San Francisco-based CrowdFlower or Amazon-owned Mechanical Turk was driven primarily by computer scientists, product developers and market researchers, rather than by the academic community. However, this trend has slowly been changing. While Amazon’s Mechanical Turk draws a great share of its workforce from countries with low labour costs, such as India, crowdsourcing platforms can also be found in Europe, for example the Berlin-based startup WorkHub, which offers so-called “cloudworking” solutions.
Instead of relying on cheap labour alone, such platforms can attract users both through monetary incentives and through game-like interaction that ultimately increases users’ intrinsic motivation. The relevance of intrinsically motivated test persons for academic research is still being explored, but the overall benefit remains: researchers can organize funding for less subsidized fields of study or exchange knowledge more deliberately with colleagues all over the world. Crowdsourced research can deliver quantitative accuracy, gained through a variety of test persons – ultimately a gain in representative evidence for academia.
About the Author
Stefanie Baumann is a graduate student at the Berlin School of Economics and Law. Her main field of study is political economy and the critical role of innovation. She is currently working for WorkHub, a Berlin-based startup offering a platform for cloudworking solutions.
- Aitamurto, T., Leiponen, A. and Tee, R. (2011). The Promise of Idea Crowdsourcing – Benefits, Contexts, Limitations. White Paper. Published by: www.ideasproject.com, June 2011. Available: online.
- Cavender-Bares, K. (2011). Sentiment Analysis Milestone: More Than One Million Human Judgments. Published by: www.crowdsourcing.org, July 10. Available: online.
- Dreifus, C. (2008). In Professor’s Model, Diversity = Productivity. The New York Times, January 8. Available: online.
- Howe, J. (2006). The Rise of Crowdsourcing. Wired Magazine Online, June 14. Available: online.
- Lévy, P. (1997). Education and Training: New Technologies and Collective Intelligence. Prospects, Vol. 27 (2), pp. 248-263.
- Mirus, J. 10 Gründe für den Einsatz von Crowdsourcing in der Marktforschung [10 Reasons for Using Crowdsourcing in Market Research]. Published by: “result – Institut für digitalen Wandel”, August 20. Available: online.
- Surowiecki, J. (2004). The Wisdom of Crowds. US: Anchor Group.
- The Economist (2012). The Roar of the Crowd: Crowdsourcing is transforming the science of psychology. In: Experimental Psychology. The Economist, May 26. Available: online.