Running head: CROWDSOURCING
Crowdsourcing: A Model for Leveraging Online Communities
Daren C. Brabham, Ph.D.
University of North Carolina at Chapel Hill
Forthcoming in: The Routledge Handbook of Participatory Cultures
Edited by Aaron Delwiche & Jennifer Henderson
Correspondence to:
UNC School of Journalism & Mass Communication
Carroll Hall, CB 3365
Chapel Hill, NC 27599
(919) 962-0676 office
(801) 633-4796 cell
email@example.com
www.darenbrabham.com
Author note: Daren C. Brabham, Ph.D., is an assistant professor in the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. Early iterations of the crowdsourcing typology presented here were developed through a 2009 white paper with Noah Friedland for the Friedland Group, and presentations at the 2010 Stakeholder Engagement online conference and the 2010 American Planning Association conference in New Orleans. Portions of this chapter were drawn from the author’s dissertation at the University of Utah, directed by Professor Joy Pierce. Date of draft: March 18, 2011
Crowdsourcing: A Model for Leveraging Online Communities

As our understanding of participatory cultures advances, there is growing interest among practitioners and scholars in how best to direct the creative, productive capabilities of Internet users toward specific purposes. A number of online businesses in the past decade have actively recruited individuals in online communities to design products and solve problems for them, often eliciting an online community’s creative output through the format of an open challenge with various rewards. Organizations that issue specific tasks to online communities in an open call format engage in the practice of “crowdsourcing.” Crowdsourcing is a model for problem solving, not merely a model for doing business (Brabham, 2008a; Brito, 2008; Fritz et al., 2009; Haklay and Weber, 2008). The crowdsourcing model is also well suited to organizations’ marketing and public relations goals, as the process of managing an online community allows organizations to forge close relationships with publics and allows consumers to participate in the making of brands (Phillips and Brabham, 2011). Thus, it is important to understand how crowdsourcing works so that the collective intelligence of online communities can be leveraged in future participatory media applications for the public good. In this chapter, I further define the crowdsourcing model by putting forth a typology of crowdsourcing. Ultimately, these types may inform the design of future participatory media applications for governments, non-profits, and activists hoping to solve pressing political, social, and environmental problems.

The Basics of Crowdsourcing
Jeff Howe, a contributing editor for Wired magazine, coined the term “crowdsourcing” in a June 2006 article (Howe, 2006c). In a companion blog, Howe (2006a) offered the following definition of crowdsourcing:
Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. . . . The crucial prerequisite is the use of the open call format and the large network of potential laborers. (para. 5)

It is important to emphasize that this process is one that is sponsored by an organization, and that the work of the large network of people—the “crowd”—is directed or managed by this organization throughout the process. This is very different from, say, the online encyclopedia project Wikipedia, where an open space exists for individuals to work collaboratively. No one at Wikipedia, for example, issues specific tasks to the online community there and manages the creation of articles; that process is directed and managed by the site’s users themselves. Wikipedia, then, is not crowdsourcing, but rather a different and equally important participatory culture phenomenon that Benkler (2002) calls “commons-based peer production.” The same is true of open source methods, processes most common to software production. Commons-based peer production and open source methods share with the crowdsourcing model the notion of openness and the use of the Internet as a collaboration platform. But while these phenomena may seem quite organized and managed, they are organized from the bottom up rather than from the top down by a sponsoring organization issuing the task. Crowdsourcing, on the other hand, blends an open creative process with a traditional, top-down, managed process.

Crowdsourcing is necessarily dependent on the Internet. The speed, reach, anonymity, opportunity for asynchronous engagement, and ability to carry many forms of media content make the Internet a crucial prerequisite for crowdsourcing. Certainly these processes can be taken offline with some success, but the platform of the Internet elevates the quality, amount, and pace of cooperation, coordination, and idea generation to a point that warrants its own
classification. Cultures have always been participatory, long before the Internet, with roots in democratic process, collective decision making, and cooperation for survival. But participatory cultures on the Internet take on a new quality, a new scale, and new capabilities.

Furthermore, all crowdsourcing types rely on the notion of collective intelligence. Pierre Lévy (1995/1997) conceived of collective intelligence as a “form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills” (p. 13). The Internet is the technology capable of this degree of coordination of intellect, and thus as the capabilities of the Internet grow, so do the possibilities for leveraging this intellect. Given the will to act, problem solving with collective intelligence and networks can be scaled up to address even global concerns (Ignatius, 2001).

Finally, all individuals engaged in a crowdsourcing application, or any aspect of participatory culture, are in some way motivated to participate. This may seem obvious, but
understanding how and why individuals participate in crowdsourcing applications is necessary for designing effective problem solving applications going forward. A number of interviews and surveys have been conducted at various crowdsourcing sites, with each study asking individuals in those crowds to explain why they participate (Brabham, 2008b, 2010a, 2010b; Lakhani et al., 2007; Lietsala and Joutsen, 2007). These studies indicate that there are many common reasons, both intrinsic and extrinsic, why people participate, but no single motivator applies to all crowdsourcing applications. For instance, the opportunity to develop one’s creative skills, build a portfolio for future employment, and challenge oneself to solve a difficult problem are motivators that emerge across several crowdsourcing cases, but some crowds are driven by financial gain and do not mention these motivators. Drawing from these existing studies, motivations for individuals in crowds that emerge across more than one case include
the desire to earn money; to develop one’s creative skills; to network with other creative professionals; to build a portfolio for future employment; to challenge oneself to solve a tough problem; to socialize and make friends; to pass the time when bored; to contribute to a large project of common interest; to share with others; and to have fun.
With regard to motivations, then, crowdsourcing is not so different a phenomenon from other forms of participatory culture, such as blogging (e.g., Liu et al., 2007), creating open source software (e.g., Bonaccorsi and Rossi, 2004; Hars and Ou, 2002; Hertel et al., 2003; Lakhani and Wolf, 2005), posting videos to YouTube (e.g., Huberman et al., 2009), contributing to Wikipedia (e.g., Nov, 2007), or tagging content at Flickr (e.g., Nov et al., 2008). Generally speaking, members of a participatory culture, including crowds, “believe their contributions matter and feel some degree of social connection with one another” (Jenkins, 2006, p. 3).

Toward a Typology of Crowdsourcing Applications

Some notable case studies of crowdsourcing help illustrate how the model functions in four different approaches and how it resembles a problem solving process. There are four dominant crowdsourcing types: the knowledge discovery and management approach, the broadcast search approach, the peer-vetted creative production approach, and the distributed human intelligence tasking approach (see Table 1).
Table 1. A Crowdsourcing Typology

Knowledge Discovery and Management
How it works: Organization tasks crowd with finding and collecting information into a common location and format.
Kinds of problems: Ideal for information gathering, organization, and reporting problems, such as the creation of collective resources.
Examples: Peer-to-Patent (peertopatent.org); SeeClickFix (seeclickfix.com).

Broadcast Search
How it works: Organization tasks crowd with solving empirical problems.
Kinds of problems: Ideal for ideation problems with empirically provable solutions, such as scientific problems.
Examples: InnoCentive (innocentive.com); Goldcorp Challenge (defunct).

Peer-Vetted Creative Production
How it works: Organization tasks crowd with creating and selecting creative ideas.
Kinds of problems: Ideal for ideation problems where solutions are matters of taste or market support, such as design or aesthetic problems.
Examples: Threadless (threadless.com); Doritos Crash the Super Bowl Contest (crashthesuperbowl.com); Next Stop Design (nextstopdesign.com).

Distributed Human Intelligence Tasking
How it works: Organization tasks crowd with analyzing large amounts of information.
Kinds of problems: Ideal for large-scale data analysis where human intelligence is more efficient or effective than computer analysis.
Examples: Amazon Mechanical Turk (mturk.com); Subvert and Profit (subvertandprofit.com).
The Knowledge Discovery and Management Approach

In the knowledge discovery and management approach, online communities are challenged to uncover existing knowledge in the network, thus amplifying the discovery capabilities of an organization with limited resources. The assumption is that a wealth of disorganized knowledge exists “out there,” and a top-down, managed process can efficiently disperse a large online community of individuals to find specific knowledge and collect it in specific ways in a common repository. This crowdsourcing type most closely resembles commons-based peer production, such as at Wikipedia, except with one crucial difference: a sponsoring organization determines exactly what information is sought, for what purpose, and
how that information is to be assembled. In this approach, the more users there are and the more involved they are, the better the system functions, a fact that could very well be applied to most participatory culture phenomena.

The Peer-to-Patent Community Patent Review project is an exemplar of the knowledge discovery and management approach to crowdsourcing (Noveck, 2006). Peer-to-Patent was a pilot project from 2007 to 2009 between New York Law School and the U.S. Patent and Trademark Office (USPTO), with support from a number of major corporate patent holders. In the Peer-to-Patent project, the USPTO siphoned off a small number of patent applications it received to an online community. Working for no monetary reward, this online community of
more than 2,000 members reviewed applications for evidence of “prior art.” Prior art is any evidence that a similar invention already exists that would negate the originality of a patent application. These findings were then routed back to the USPTO. Overburdened and backlogged with patent applications, the USPTO then used these findings to help determine whether new patents should be awarded.

Another example of the knowledge discovery and management approach is SeeClickFix, a Web site that allows people to report non-emergency problems in their local community, either on the SeeClickFix Web site itself or through a free mobile phone application. These problems include potholes, graffiti, malfunctioning traffic signals, obstructed wheelchair access ramps on sidewalks, and other issues of public safety and disrepair. City governments, as well as journalists, use SeeClickFix as an intelligence gathering mechanism to better understand issues facing a community and to better allocate resources to fix the problems. According to a SeeClickFix spokesperson, “on average, more than 40 percent of issues reported on the site get resolved” (Smith, 2010, para. 13).
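To make the mechanics of this approach concrete, consider a minimal sketch of the kind of common repository a SeeClickFix-style system maintains. The data model and field names below are illustrative assumptions, not SeeClickFix’s actual implementation; the point is that the sponsoring organization fixes the location and format in which the crowd’s reports are collected, so the aggregate can be queried to direct resources.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class IssueReport:
    """One citizen-submitted report, in the format the organization prescribes."""
    category: str       # e.g., "pothole", "graffiti"
    latitude: float
    longitude: float
    description: str
    resolved: bool = False

# The common repository: every report lands in one place, in one format.
repository = []

def submit_report(report: IssueReport) -> None:
    """Open call: anyone may add a report, but only in the prescribed format."""
    repository.append(report)

def summarize_open_issues() -> Counter:
    """The sponsoring organization queries the pooled reports to allocate resources."""
    return Counter(r.category for r in repository if not r.resolved)

# Illustrative use: two crowd members report local problems.
submit_report(IssueReport("pothole", 41.31, -72.92, "Deep pothole on Elm St."))
submit_report(IssueReport("graffiti", 41.30, -72.93, "Tagging on underpass wall"))
print(summarize_open_issues())  # Counter({'pothole': 1, 'graffiti': 1})
```

What distinguishes this from commons-based peer production is that the schema and the query both belong to the sponsoring organization, not to the crowd.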
The Broadcast Search Approach

Broadcast search approaches to crowdsourcing are oriented toward finding the single specialist with time on his or her hands, probably outside the direct field of expertise of the problem, who is capable of adapting previous work to produce a solution. In theory, the wider the net cast by the crowdsourcing organization, the more likely the company will turn up the
“needle in the haystack,” that one person who knows the answer. The broadcast search approach is appropriate for problems where a provable, empirically “right” answer exists but is simply not yet known by the organization. Broadcasting the problem in an open way online draws in potential solutions. Scientific problems, such as developing new chemicals and materials or locating resources for mining using geophysical data, are best suited to the broadcast search approach. In the broadcast search approach, monetary rewards are common for individuals in the crowd who provide a solution to a challenge, though financial incentive is not the only motivation for these crowds to participate in these arrangements.

InnoCentive, founded in 2002, focuses on providing research and development solutions for a broad range of topic areas, from biomedical and big pharmaceutical concerns to engineering and computer science topics. An exemplar of the broadcast search approach, InnoCentive boasts a community of dozens of client companies, called “Seekers,” and an online community of 165,000 “Solvers.” Seeker companies issue difficult scientific challenges to the Solver community, with cash awards ranging from US$5,000 to US$1 million. According to Lakhani et al. (2007), “[s]olution requirements for the problems are either ‘reduction to practice’ (RTP) submissions, i.e., requiring experimentally validated solutions, such as actual chemical or biological agents or experimental protocols, or ‘paper’ submissions, i.e., rationalized theoretical solutions codified through writing” (p. 5). Submitted solutions are never seen by other Solvers;
only Seekers pore over submissions. Solvers with winning solutions are awarded the cash bounties in exchange for the Seeker company taking ownership of the intellectual property, and
InnoCentive receives a fee from the Seeker company for listing the challenge and facilitating the process. The problem set in broadcast search approaches consists of difficult, if well defined and scoped, scientific and engineering challenges. Lakhani et al. (2007) conducted a statistical analysis of the InnoCentive service between 2001 and 2006. They found that the Solver community was able to solve 29% of the problems that the Seekers—all large companies with internal labs and researchers—posted after failing to solve those problems internally. Moreover, the results showed a positive correlation between a Solver’s distance from the field in which the problem was presented and the likelihood of that Solver creating a successful solution. That is, Solvers on the margins of a disciplinary domain—outsiders to a given problem’s domain of specialty—performed better at solving the problem.
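The workflow just described can be modeled in a few lines. The sketch below is a toy model of the broadcast search protocol, not InnoCentive’s actual system; the class names, the validation function, and the award amount are all illustrative assumptions. What it captures is the essential logic: submissions are visible only to the Seeker, and the bounty goes to the solution that passes the Seeker’s empirical test.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Challenge:
    """An open call broadcast by a Seeker organization."""
    description: str
    award_usd: int
    submissions: List[Tuple[str, str]] = field(default_factory=list)

    def submit(self, solver: str, solution: str) -> None:
        # Open call: any Solver may respond. Submissions are held privately
        # and never shown to other Solvers.
        self.submissions.append((solver, solution))

    def judge(self, is_valid: Callable[[str], bool]) -> Optional[str]:
        # Only the Seeker reviews submissions, against an empirical test;
        # the winner trades intellectual property rights for the cash award.
        for solver, solution in self.submissions:
            if is_valid(solution):
                return solver
        return None

# Illustrative use: a toy empirically checkable problem standing in for,
# say, a lab-validated chemical protocol.
challenge = Challenge("Find an integer whose square is 1764", award_usd=25_000)
challenge.submit("solver_in_field", "40")
challenge.submit("solver_on_margins", "42")
print(challenge.judge(lambda s: int(s) ** 2 == 1764))  # -> solver_on_margins
```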
The Goldcorp Challenge was a similar broadcast search crowdsourcing case (Tischler, 2007). Goldcorp, a Canadian gold mining company, launched the Challenge in March 2000. According to a company press release, “participants from around the world were encouraged to examine the geologic data [from Goldcorp’s newly acquired Red Lake Mine in Ontario] and submit proposals identifying potential targets where the next six million ounces of gold will be found” (“Goldcorp,” 2001, para. 6). By offering more than US$500,000 in prize money to the 25 top finalists who identified the most gold deposits, Goldcorp attracted “more than 475,000 hits” to the Challenge’s Web site, and “more than 1,400 online prospectors from 51 countries registered as Challenge participants” (“Goldcorp,” 2001, para. 6). The numerous solutions from the crowd confirmed many of Goldcorp’s suspected deposits and identified several new ones, 110 deposits in all.

The Peer-Vetted Creative Production Approach

The logic of the peer-vetted creative production approach is that by opening up the creative phase of a designed product to a potentially vast network of Internet users, some superior ideas will surface among the flood of submissions. Further still, the peer vetting process will simultaneously identify the best ideas and collapse the market research process into an instance of firm-consumer co-creation. It is a system where a “good” solution is also the popular solution that the market will support. Peer-vetted creative production is appropriate, then, for problem solving concerning matters of taste and user preference, such as aesthetic and design problems.

Howe (2006b) calls Threadless one of the exemplar cases of crowdsourcing: “pure, unadulterated (and scalable) crowdsourcing.” Based in Chicago and formed in late 2000, Threadless is the flagship property of parent company skinnyCorp, whose motto is “skinnyCorp creates communities” (skinnyCorp, n.d.). Threadless is an online clothing company, and as of June 2006 it was “selling 60,000 t-shirts a month, [had] a profit margin of 35 per cent [sic] and [was] on track to gross [US]$18 million in 2006,” all with “fewer than 20 employees” (Howe, 2006b, para. 1). At Threadless, the ongoing challenge to the registered members of the online community is to design and select silk-screen t-shirts. Members can download t-shirt design templates and color palettes for desktop graphics software packages, such as Adobe Illustrator, and create t-shirt design ideas. They then upload the designs to a gallery on the Threadless Web site, where the submissions remain in a contest for a week. Members vote on designs in the gallery during
this time on a five-point rating scale. At the end of the week, the highest rated designs are finalist candidates for printing, and the Threadless staff chooses about five designs to mass produce each week. These “t-shirts are then produced in short production runs and sold on the site,” back to members in the online community (as well as to unregistered visitors to the site) through a typical online storefront (Fletcher, 2006, p. 6). Threadless awards winning designers US$2,000 in cash and US$500 in Threadless gift certificates in exchange for their intellectual property.

User-generated advertising contests, such as the Doritos Crash the Super Bowl Contest, are also examples of the peer-vetted creative production approach (Brabham, 2009), as are participatory design contests, such as Next Stop Design (Brabham et al., 2010). Next Stop Design was an effort in 2009-2010 to crowdsource public participation in transit planning, beginning with a competition to design a better bus stop shelter. The project, funded by the U.S. Federal Transit Administration, allowed participants to upload bus stop shelter designs to a gallery on the Next Stop Design Web site and then to rate the designs of peers in the gallery. The three designs with the highest average score at the close of the four-month competition were declared the winners. Without any monetary incentive or promise to actually construct the winning designs, nearly 3,200 registered users submitted 260 bus stop shelter designs in the competition.
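The vetting mechanic common to Threadless and Next Stop Design reduces to a simple computation. The sketch below is a minimal illustration, assuming a five-point scale and a winner-by-highest-average rule as described above; the data structures and design names are my own simplifications, not either site’s actual code.

```python
from statistics import mean

# Each design accumulates peer ratings on a five-point scale.
ratings = {
    "bus_shelter_a": [5, 4, 5, 3, 4],
    "bus_shelter_b": [3, 3, 4, 2, 3],
    "bus_shelter_c": [5, 5, 4, 5, 4],
}

def top_designs(ratings: dict, n: int = 3) -> list:
    """Peer vetting: rank designs by average rating and keep the top n.
    (Threadless adds a staff pick on top of this step; Next Stop Design
    declared the three highest averages the winners outright.)"""
    averages = {design: mean(scores) for design, scores in ratings.items()}
    return sorted(averages, key=averages.get, reverse=True)[:n]

print(top_designs(ratings, n=2))  # ['bus_shelter_c', 'bus_shelter_a']
```

Even a sketch this small makes visible the gaming risk discussed later in this chapter: a flood of fraudulent five-star votes shifts an average easily, which is why real deployments must audit votes and accounts.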
Distributed Human Intelligence Tasking

Different still from the previous cases is the distributed human intelligence tasking approach to crowdsourcing. This is an appropriate approach when a corpus of data is known and the problem is not to produce designs, find information, or develop solutions, but rather to process data. It is similar to the concept of large-scale distributed computing projects, such as SETI@home and Rosetta@home, except replacing spare computing cycles with humans engaged in short cycles of labor. Large
data problems are decomposed into small tasks requiring human intelligence, and individuals in the crowd are compensated for processing the bits of data. Because this crowdsourcing approach is certainly the least creative and intellectually demanding for individuals in the crowd, monetary compensation is a common motivator for participation.

The most notable example of the distributed human intelligence tasking approach is Amazon Mechanical Turk (Barr and Cabrera, 2006). At Mechanical Turk, “Requesters” can use the site to coordinate a series of simple tasks they need accomplished by humans, tasks that computers cannot easily do, such as accurately tagging the content of images on the Internet for a search engine. Individuals in the Mechanical Turk community, known as “Turkers,” can then sign up to accomplish a series of these “human intelligence tasks” (HITs) for very small monetary rewards paid by the Requester. Mechanical Turk essentially coordinates large-scale collections of simple tasks requiring human intelligence.
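A toy simulation makes the decomposition-and-aggregation pattern concrete. The code below is not Mechanical Turk’s actual API; the task, the scripted stand-in workers, and the majority-vote quality check are all illustrative assumptions. It shows the core of the approach: a large data problem split into microtasks, each routed redundantly to several humans, with their judgments aggregated.

```python
from collections import Counter

# A large data problem: images that need human-applied tags.
images = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]

def simulated_turker(worker_id: int, image: str) -> str:
    """Stand-in for a human completing one HIT (here, trivially scripted).
    In a real deployment each completed HIT pays the worker a few cents."""
    return "cat" if hash((worker_id, image)) % 4 else "dog"

def label_corpus(images: list, workers_per_task: int = 3) -> dict:
    """Decompose the corpus into per-image microtasks, assign each task to
    several workers, and keep the majority answer as a quality check."""
    labels = {}
    for image in images:
        votes = Counter(
            simulated_turker(w, image) for w in range(workers_per_task)
        )
        labels[image] = votes.most_common(1)[0][0]
    return labels

print(label_corpus(images))
```

Redundant assignment of each microtask is a standard defense against careless or noisy work when individual tasks pay only cents.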
This kind of distributed human intelligence tasking can be seen in other cases. For example, Subvert and Profit uses this format to coordinate the gaming of social media sites such as Digg and StumbleUpon (Powazek, 2007). Confidential clients pay Subvert and Profit to distribute rating tasks for certain stories and Web sites to a crowd of registered users, who can each make small amounts of money for performing the tasks. Calling its product “social media optimization,” Subvert and Profit claims to have placed thousands of content items on the front pages of high-traffic sites like Digg, resulting in millions of views for paid items. On its site, the company estimates its method is “30 to 100 times more cost effective than conventional Internet advertising” (“FAQ,” n.d.).

Limitations of Crowdsourcing

There are a number of potential issues surrounding the crowdsourcing model that are worth exploring. First, for crowdsourcing to be successful, it must rely on a robust, active,
motivated crowd. Though much research has been done on online communities, there is still no coherent set of best practices for organizations hoping to build and sustain these kinds of communities. We know that an organization must devote a good deal of time and attention to growing an online community, we know these communities need to be motivated to participate, and we know that crowds can turn on an organization in ways that damage a brand’s reputation (e.g., Bosman, 2006), but our understanding of fickle online communities is still quite undeveloped. Online community management will likely become an important sub-field of public relations and marketing as more organizations integrate crowds into their operations in coming years.

Second, crowdsourcing requires a great deal of transparency and trust on the part of an organization. Opening a challenge to an online community requires an organization to specify the parameters of a given problem, which may require it to expose its proprietary data, its inner workings, or its anxieties and weaknesses. To leverage the power of crowds, organizations must surrender a bit of their own power by letting online communities become meaningful stakeholders. Not all organizations or industries are willing and able to do this.

Third, crowdsourcing applications can be manipulated and gamed just like any other aspect of participatory culture. The success of a company like Subvert and Profit, for instance, casts doubt on the organic, democratic virtues of so-called peer-recommended news aggregation sites such as Digg. Competitions, especially those in the peer-vetted creative production vein, can be flooded with fraudulent votes or phony accounts. Essentially, one cannot claim that a crowdsourcing application is completely “of the people, by the people,” as surely some people
exert additional influence through subversive means in these applications. Crowdsourcing should never be claimed as a complete replacement for more secure, regulated forms, especially if crowdsourcing is used in government affairs.

Issues of cheating aside, there is a fourth limitation in claiming that a crowdsourcing process resulted in something that “the people” wanted. This limitation concerns representation. Since crowdsourcing occurs on the Internet, and since Internet access is lower among the economically disadvantaged and racial and ethnic minorities, we can never fully claim that a design that wins in a crowdsourcing competition is what is wanted by all. In fact, making this claim works to mask critical conversations about technology access and democratic participation in these crowdsourcing forms. If crowds are relatively homogenous and elite in their makeup, then crowdsourcing applications may work to reproduce the hegemonic values of those in power through a kind of aesthetic tyranny of the majority. We must take great care in the ways we talk about crowdsourcing’s virtues.

Lastly, there is a valid complaint that crowdsourcing is exploitative. Compared to the profits Threadless makes on the sale of its crowd-made products, for instance, the prize money earned by winning designers is quite small. And even the very large cash prizes awarded to successful Solvers at InnoCentive likely result in enormous profits for the scientific companies that secure the intellectual property rights to an invention. These industry shifts brought on by crowdsourcing and other participatory media processes have also driven down the prices graphic designers and other creative professionals were once able to command for their work. Crowdsourcing, then, may well favor the crowdsourcing organization at the expense of the individual laborer in the crowd.

Conclusion
Crowdsourcing is one specific form of participatory social media, part of a greater media landscape that includes open source production, commons-based peer production, blogging, video-posting and photo-sharing sites, massively multiplayer online games, and other forms. It is distinct from these other forms, however, in that it involves an organization-user relationship whereby the organization executes a top-down, managed process that seeks the bottom-up, open, creative input of users in an online community. Because an organization issues specific tasks to an open community and manages the process, crowdsourcing can leverage the power of crowds to tackle specific challenges in specified formats and on planned schedules. This element of management is what makes crowdsourcing different, productive, and full of potential to do good.

Each of the crowdsourcing types—knowledge discovery and management, broadcast search, peer-vetted creative production, and distributed human intelligence tasking—can be employed in specific contexts to accomplish certain goals. Depending on the nature of a problem, the type of input needed from a crowd, and what motivates crowds to participate in a specific task environment, any number of new media tools could be designed to meet the needs of an organization in search of a solution to a problem. And why not design these tools to serve the public good, rather than focus entirely on for-profit applications? Why not draw upon this typology of crowdsourcing applications to make governance more efficient and inclusive, to search for difficult scientific solutions, to craft better public policy, or to otherwise leverage the collective intelligence of online communities to improve the human condition? By opening up the problem solving process and managing the input of crowds to address focused needs, crowdsourcing could be used to improve public participation in the crafting of government policies, injecting more of the voice of the people into democratic processes. Or crowdsourcing could be used to innovate cleaner forms of energy or develop better
medicines and cures for diseases. Or perhaps crowdsourcing can facilitate public art projects,
redesign public transit systems, and help government agencies collect data or enforce compliance with laws in a community. Crowdsourcing may very well be a model for solving the world’s most challenging problems, channeling the energies of participatory cultures for a greater purpose.
References

Jeff Barr and Luis Felipe Cabrera, 2006. “AI Gets a Brain: New Technology Allows Software to Tap Real Human Intelligence,” ACM Queue, volume 4, number 4, pp. 24-29.

Yochai Benkler, 2002. “Coase’s Penguin, or, Linux and The Nature of the Firm,” Yale Law Journal, volume 112, number 3, pp. 369-446.

Andrea Bonaccorsi and Cristina Rossi, 2004. “Altruistic Individuals, Selfish Firms?: The Structure of Motivation in Open Source Software,” First Monday, volume 9, number 1, at http://www.firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1113/1033, accessed 15 September 2010.

Julie Bosman, 2006. “Chevy Tries a Write-Your-Own-Ad Approach, and the Potshots Fly,” New York Times (4 April), at http://www.nytimes.com/2006/04/04/business/media/04adco.html, accessed 18 March 2011.

Daren C. Brabham, 2008a. “Crowdsourcing as a Model for Problem Solving: An Introduction and Cases,” Convergence: The International Journal of Research into New Media Technologies, volume 14, number 1, pp. 75-90.

Daren C. Brabham, 2008b. “Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application,” First Monday, volume 13, number 6, at http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2159/1969, accessed 15 September 2010.

Daren C. Brabham, 2009. “Crowdsourced Advertising: How We Outperform Madison Avenue,” Flow: A Critical Forum on Television and Media Culture, volume 9, number 10, at http://flowtv.org/?p=3221, accessed 15 September 2010.

Daren C. Brabham, 2010a. Crowdsourcing as a Model for Problem Solving: Leveraging the Collective Intelligence of Online Communities for Public Good. Unpublished doctoral dissertation, University of Utah.

Daren C. Brabham, 2010b. “Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application,” Information, Communication & Society, volume 13, number 8, pp. 1122-1145.

Daren C. Brabham, Thomas W. Sanchez, and Keith Bartholomew, 2010. “Crowdsourcing Public Participation in Transit Planning: Preliminary Results from the Next Stop Design Case,” paper presented at the annual meeting of the Transportation Research Board of the National Academies, Washington, DC.

Jerry Brito, 2008. “Hack, Mash, & Peer: Crowdsourcing Government Transparency,” The Columbia Science and Technology Law Review, volume 9, pp. 119-157.

“FAQ,” n.d. Subvert and Profit, at http://subvertandprofit.com/content/faq, accessed 18 March 2011.

Adam Fletcher, 2006. Do Consumers Want to Design Unique Products on the Internet? A Study of the Online Virtual Community of Threadless.com and Their Attitudes to Mass Customisation, Mass Production and Collaborative Design. Unpublished bachelor’s thesis, Nottingham Trent University, UK.

Steffen Fritz, Ian McCallum, Christian Schill, Christoph Perger, Roland Grillmayer, Frédéric Achard, Florian Kraxner, and Michael Obersteiner, 2009. “Geo-Wiki.org: The Use of Crowdsourcing to Improve Global Land Cover,” Remote Sensing, volume 1, number 3, pp. 345-354.

“Goldcorp Challenge Winners!,” 2001. The Goldcorp Challenge, at http://www.goldcorpchallenge.com/challenge1/winnerslist/challeng2.pdf, accessed 2 February 2008.

Mordechai (Muki) Haklay and Patrick Weber, 2008. “OpenStreetMap: User-generated Street Maps,” IEEE Pervasive Computing, volume 7, number 4, pp. 12-18.

Alexander Hars and Shaosong Ou, 2002. “Working for Free?: Motivations for Participating in Open Source Projects,” International Journal of Electronic Commerce, volume 6, number 3, pp. 25-39.

Guido Hertel, Sven Niedner, and Stefanie Hermann, 2003. “Motivation of Software Developers in Open Source Projects: An Internet-based Survey of Contributors to the Linux Kernel,” Research Policy, volume 32, number 7, pp. 1159-1177.

Jeff Howe, 2006a. “Crowdsourcing: A Definition,” Crowdsourcing: Tracking the Rise of the Amateur (weblog, 2 June), at http://crowdsourcing.typepad.com/cs/2006/06/crowdsourcing_a.html, accessed 15 September 2010.

Jeff Howe, 2006b. “Pure, Unadulterated (and Scalable) Crowdsourcing,” Crowdsourcing: Tracking the Rise of the Amateur (weblog, 15 June), at http://crowdsourcing.typepad.com/cs/2006/06/pure_unadultera.html, accessed 15 September 2010.

Jeff Howe, 2006c. “The Rise of Crowdsourcing,” Wired, volume 14, number 6 (June), at http://www.wired.com/wired/archive/14.06/crowds.html, accessed 15 September 2010.

Bernardo A. Huberman, Daniel M. Romero, and Fang Wu, 2009. “Crowdsourcing, Attention and Productivity,” Journal of Information Science, volume 35, number 6, pp. 758-765.

David Ignatius, 2001. “Try a Network Approach to Global Problem-solving,” International Herald Tribune (29 January), at http://www.iht.com/articles/2001/01/29/edignatius.2.t_1.php?page=1, accessed 15 September 2010.

Henry Jenkins (with Ravi Purushotma, Katherine Clinton, Margaret Weigel, and Alice J. Robison), 2006. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Chicago: The MacArthur Foundation, at http://www.newmedialiteracies.org/files/working/NMLWhitePaper.pdf, accessed 15 September 2010.

Karim R. Lakhani, Lars Bo Jeppesen, Peter A. Lohse, and Jill A. Panetta, 2007. “The Value of Openness in Scientific Problem Solving,” Harvard Business School Working Paper, number 07-050, at http://www.hbs.edu/research/pdf/07-050.pdf, accessed 15 September 2010.

Karim R. Lakhani and Robert G. Wolf, 2005. “Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects,” In: Joseph Feller, Brian Fitzgerald, Scott A. Hissam, and Karim R. Lakhani (editors). Perspectives on Free and Open Source Software. Cambridge, Mass.: MIT Press, pp. 3-22.

Pierre Lévy, 1997. Collective Intelligence: Mankind’s Emerging World in Cyberspace (trans. Robert Bononno). New York: Plenum. (Original work published 1995)

Katri Lietsala and Atte Joutsen, 2007. “Hang-a-rounds and True Believers: A Case Analysis of the Roles and Motivational Factors of the Star Wreck Fans,” In: Artur Lugmayr, Katri Lietsala, and Jan Kallenbach (editors). MindTrek 2007 Conference Proceedings. Tampere, Finland: Tampere University of Technology, pp. 25-30.

Su-Houn Liu, Hsiu-Li Liao, and Yuan-Tai Zeng, 2007. “Why People Blog: An Expectancy Theory Analysis,” Issues in Information Systems, volume 8, number 2, pp. 232-237.

Oded Nov, 2007. “What Motivates Wikipedians?,” Communications of the ACM, volume 50, number 11, pp. 60-64.

Oded Nov, Mor Naaman, and Chen Ye, 2008. “What Drives Content Tagging: The Case of Photos on Flickr,” In: Margaret Burnett, Maria Francesca Costabile, Tiziana Catarci, Boris de Ruyter, Desney Tan, Mary Czerwinski, and Arnie Lund (editors). Proceedings of the 26th Annual SIGCHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, pp. 1097-1100.

Beth Simone Noveck, 2006. “‘Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform,” Harvard Journal of Law & Technology, volume 20, number 1, pp. 123-262.

Laurie Phillips and Daren C. Brabham, 2011. “How Today’s Digital Landscape Redefines the Notion of Control in Public Relations,” paper presented at the 14th annual International Public Relations Research Conference, Coral Gables, FL.

Derek Powazek, 2007. “Exploring the Dark Side of Crowdsourcing,” Wired (11 July), at http://www.wired.com/techbiz/media/news/2007/07/tricksters, accessed 15 September 2010.

skinnyCorp, n.d. skinnyCorp, at http://www.skinnycorp.com, accessed 15 September 2010.

Abbe Smith, 2010. “SeeClickFix Celebrates 50G Issues Reported,” New Haven Register (7 August), at http://www.nhregister.com/articles/2010/08/07/news/aa3_neseeclickfix080710.txt, accessed 18 March 2011.

Linda Tischler, 2007. “He Struck Gold on the Net (Really),” Fast Company (19 December), at http://www.fastcompany.com/magazine/59/mcewen.html, accessed 15 September 2010.