Editor's Note: Thanks to Henry Sauermann at the Georgia Institute of Technology for the following summary.
A growing amount of scientific research is done in an open, collaborative fashion, in projects that are sometimes labeled “crowd science”, “citizen science”, or “networked science”. While successful projects such as Galaxy Zoo, Foldit, or Polymath have attracted much attention, we know little about the general benefits and challenges crowd science projects face. Moreover, it is not even clear how such projects compare to “traditional” science, or to other types of projects such as innovation contests.
In a new research study, authors Chiara Franzoni and Henry Sauermann begin to tackle these questions. The authors begin with a brief description of three popular crowd science projects, giving readers a feel for how these projects are organized and what kinds of scientific tasks can be accomplished. The authors then develop a taxonomy to compare crowd science to other “regimes of knowledge production”. In doing so, they highlight that crowd science projects are characterized by two types of “openness”: First, crowd science projects are open to contributions from a wide range of participants, contrary to what is common in traditional science. Second, projects tend to make logs and data openly available, again differentiating them from traditional science as well as from innovation contests and other types of crowdsourcing, where such outputs are typically kept secret.
The authors then discuss how these high degrees of openness are likely to result in certain benefits and challenges for crowd science projects. The benefits include access to a broader pool of skills and knowledge, including not only “cheap labor” but also rare skills, some of which may be quite different from what is typically taught in traditional science education. Other benefits include broader geographic coverage (for projects that collect certain types of data) as well as greater transparency for purposes of verifying results.
A particularly interesting aspect of this paper is that the authors clearly recognize that not all crowd science projects are created equal. Indeed, they distinguish projects by the level of skills required (from common skills to expert skills) and by the complexity of the tasks contributors carry out (from simple data collection to collective problem solving). These dimensions, in turn, suggest that different projects may face different benefits and challenges (see figure 1).
On the challenges side, the authors highlight the need to modularize projects and to coordinate and integrate distributed activities, as well as potential incentive conflicts among different types of contributors. The authors also draw on organizational research and studies of open source software development to conjecture how some of these challenges may be addressed. While many of these ideas have not been tested in the context of crowd science, they should be of great interest to scientists who are thinking about starting a crowd science project. Franzoni and Sauermann also discuss implications for funding agencies and policy makers, and briefly consider whether and how firms may get involved in crowd science.
The full study is available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2167538.