Editor's Note: The following is a guest post from one of our contributing experts, Panos Ipeirotis, an Associate Professor and George A. Kellner Faculty Fellow at the Department of Information, Operations, and Management Sciences at Leonard N. Stern School of Business of New York University. This post appeared originally on his blog and is re-posted here with permission.
I just finished reading the paper "Inefficient Hiring in Entry-Level Labor Markets" by Amanda Pallais, an assistant professor of Economics at Harvard University.
This is the first paper that I have read that provides experimental evidence that labor markets are "not efficient" in the following way: if we have a new worker, or a worker with no known past history, we do not know what the worker can and cannot do. Most employers will not hire this worker because of that lack of knowledge. And since the worker is never hired, nobody is able to leave feedback about the worker's performance. This leads to a vicious cycle for new entrants: they cannot break into the market because they do not have feedback, and they cannot get feedback because they cannot get into the market.
While this phenomenon is well known, it was not obvious that lack of feedback causes this inefficiency. The alternative explanation was that good workers find work anyway, and bad workers simply do not get jobs because they do not even know how to apply and enter the market efficiently.
What Amanda did was pretty interesting. She created a randomized experiment: she used oDesk and opened a position for data entry, a position that required pretty much no special skills. She received approximately 3,000 job applications and randomly hired 1,000 of the applicants. The 2,000 non-hired workers formed the "control" group. Within the 1,000 hired workers, she created two groups: one that received detailed public feedback and evaluation, and another that received generic, uninformative feedback (e.g., "Good work"). Given the randomized selection, the differences in the future trajectories of the workers were pretty much the result of the treatments in this controlled field experiment.
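The assignment logic above can be sketched in a few lines of Python. This is only an illustration of the design, not the paper's actual code; the function name and the even split of the hired group into the two feedback arms are my assumptions (the paper's exact arm sizes are not stated here).

```python
import random

def assign_groups(applicants, n_hired=1000, seed=42):
    """Randomly split applicants into a non-hired control group and
    two hired treatment arms (detailed vs. generic feedback)."""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)  # randomization makes the groups comparable
    hired, control = pool[:n_hired], pool[n_hired:]
    # Assumed even split of hired workers into the two feedback arms.
    detailed = hired[: n_hired // 2]
    generic = hired[n_hired // 2 :]
    return {"control": control, "detailed": detailed, "generic": generic}

groups = assign_groups(range(3000))
print(len(groups["control"]), len(groups["detailed"]), len(groups["generic"]))
# 2000 500 500
```

Because assignment is random, any later difference in outcomes between the arms can be attributed to the feedback treatment rather than to worker selection.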
The results were revealing:
Even more notably, the benefit to the workers who received the "you get a job" treatment did not come at the expense of other workers. Employment increased, and the money that was "wasted" to conduct the experiment (the tasks were not useful to anyone) generated enough return to cover the cost.
In principle, oDesk may want to engage in such "wasteful" hiring just to bootstrap workers with some meaningful feedback in their profiles: when you create an account at oDesk, you get a random job (whose output nobody cares about), and the quality of the submitted work is evaluated to generate some meaningful feedback for the worker (e.g., "great at setting up a MapReduce task on Amazon Web Services").
Or, perhaps, they can skip the wasteful part, and use crowdsourcing as a perfectly valid mechanism for generating this valuable public feedback by letting people do actual work.
Crowdsourcing as a solution to the cold start problem
Note how this need for early feedback so that workers can enter the market naturally leads to crowdsourcing as a solution to the entrance problem.
If getting a job is the blocker for starting your career, then crowdsourcing allows new entrants to pick jobs without having to worry about the interview process. Just pick an available task and do it.
The findings of the study also suggest that crowdsourcing by itself is not enough. Any crowdsourcing application that provides jobs should be accompanied by a detailed feedback/scoring system. For example, if the crowdsourcing platform is about, say, translation, then there should be public feedback listing the tasks that the person completed (which language pairs, etc.) along with the corresponding performance statistics (e.g., time taken to complete the task, quality of the outcome, etc.).
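A minimal sketch of what such a public feedback record might look like, using the translation example. The class and field names are my own illustration, not any platform's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskFeedback:
    task: str            # e.g., "EN -> FR translation"
    hours_taken: float   # time taken to complete the task
    quality: float       # reviewer score, assumed here on a 0-5 scale

@dataclass
class WorkerProfile:
    worker_id: str
    history: List[TaskFeedback] = field(default_factory=list)

    def add_feedback(self, fb: TaskFeedback) -> None:
        self.history.append(fb)

    def average_quality(self) -> float:
        # Aggregate score that would be shown publicly to employers.
        if not self.history:
            return 0.0
        return sum(f.quality for f in self.history) / len(self.history)

profile = WorkerProfile("new_worker")
profile.add_feedback(TaskFeedback("EN -> FR translation", hours_taken=2.5, quality=4.5))
profile.add_feedback(TaskFeedback("EN -> DE translation", hours_taken=3.0, quality=4.0))
print(profile.average_quality())  # 4.25
```

The point is that the record is task-level and public: a new worker accumulates verifiable history with each completed crowdsourcing task, which is exactly the signal that employers lack under the cold start problem.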
In a setting like this, crowdsourcing becomes not a novelty item but an integral part of any labor platform, facilitating entry of the workers. It is not a place where jobs get done on the cheap. It is the place that generates information about the quality of the workers, which in turn makes the workers more valuable to the firms.
Should crowdsourcing firms receive favorable treatment by the government?
So, if crowdsourcing tasks that generate *public* feedback on the performance of the participating workers benefit the workers, future employers, and society overall (by decreasing unemployment), the question is why not encourage companies to make more of their work available in such a format. While a service like Mechanical Turk would not qualify (workers are anonymous, and there is no reputation system), other services that generate useful public information could be the focus of favorable legislation and/or tax treatment.
Perhaps it is time to give to crowdsourcing the attention and stature it deserves.
- Panos Ipeirotis is an Associate Professor and George A. Kellner Faculty Fellow at the Department of Information, Operations, and Management Sciences at Leonard N. Stern School of Business of New York University. He is also the Chief Scientist at Tagasauris, and in 2012-2013 serves as “academic-in-residence” at oDesk Research. His recent research interests focus on crowdsourcing and on mining user-generated content on the Internet. He received his Ph.D. degree in Computer Science from Columbia University in 2004, working with Prof. Luis Gravano. He has received three “Best Paper” awards (IEEE ICDE 2005, ACM SIGMOD 2006, WWW 2011), two “Best Paper Runner Up” awards (JCDL 2002, ACM KDD 2008), and is also a recipient of a CAREER award from the National Science Foundation and several other industry grants. In his spare time, he writes about crowdsourcing and various other topics on his blog, “A Computer Scientist in a Business School,” an activity that seems to generate more interest and recognition than any of the above.