Crowdsourcing.org The Industry Website


Cloud Labor Scuffle: Ziptask, AutoMan, and MTurk’s Flaws
editorial

Editor's Note: The following sponsored post comes to us from Ziptask's Seth Weinstein. It was originally published at Ziptask's Work 3.0 blog. 

Researchers at the University of Massachusetts have recently created AutoMan, a new cloud labor system that aims to automate not the worker, but the boss. New Scientist’s Douglas Heaven reports that AutoMan is a fully automatic system that analyzes and delegates tasks to human workers on Amazon Mechanical Turk. Where Ziptask simplifies task outsourcing via our task management team and a “set it and forget it” setup, AutoMan seeks to handle the process completely automatically. If AutoMan succeeds, it could wildly improve on the original Turk by automating oversight, the one process that has so far remained untouched.

In the report published by the UMass researchers, the grievances against MTurk are laid out quite succinctly on the very first page: Turk doesn’t scale well to complicated tasks, it’s often difficult to determine the appropriate payment or time scale for a job, and there’s no guarantee that the finished work will be of acceptable quality. Though similar in aim, Ziptask and AutoMan each have their own way of addressing these flaws.

Scale and Complexity

MTurk is great for simple tasks like identifying the subjects of photos, but when it comes to complicated, iterative, or interrelated tasks, its power often falls short. The problem is that clients must break complex tasks into bite-sized chunks of work better suited to the platform. Ziptask solves this with its team of project managers, who can break down and assign tricky tasks to multiple workers, or comb through their database for a worker who is qualified for all aspects of the task. Unfortunately, it does not appear that AutoMan will have any innate capability to split up or delegate a task in this way; perhaps this functionality will be addressed in a later update. We’ve discussed the strength of Ziptask’s scalability before, so I hope the UMass researchers have something good up their sleeves.

Payment and Time

Those who wish to assign work via MTurk not only have to format and post their task, but must also determine how long it should take and how much money it’s worth. Since task posters are by definition already short on time, this step becomes an unnecessary speed bump. Ziptask, again via its human team of supervisors, prices jobs automatically based on the difficulty and type of work. Since the labor is compensated per minute, they’ll also determine a cutoff price to help you avoid going over budget. By contrast, AutoMan turns the process into trial and error governed by a series of formulas. Price is calculated from the duration of the work and the federal minimum wage, and task time limits default to 30 seconds. AutoMan automatically adjusts both the task price and the time limit (upwards) if it isn’t getting the results it requires. Clients can override these defaults if the task requires, but the process is otherwise very standardized.
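To make the idea concrete, here is a minimal Python sketch of that kind of wage-based pricing with escalation. The function names and the doubling rule are illustrative assumptions, not AutoMan's actual API or formulas; the report only says price and time limit are raised when results fall short.

```python
# Hypothetical sketch of AutoMan-style scheduling: start from a default
# time limit and a minimum-wage-based price, then raise both whenever a
# round of posted tasks fails to yield enough usable answers.
# (Names and the doubling rule are assumptions, not AutoMan's real API.)

FEDERAL_MIN_WAGE = 7.25  # USD per hour (US federal minimum at the time)

def initial_price(time_limit_s: float, wage: float = FEDERAL_MIN_WAGE) -> float:
    """Pay at least the given hourly wage for the allotted time."""
    return round(wage * time_limit_s / 3600.0, 2)

def escalate(price: float, time_limit_s: float) -> tuple[float, float]:
    """Raise both price and time limit when results are insufficient."""
    return round(price * 2, 2), time_limit_s * 2

price, limit = initial_price(30.0), 30.0   # 30-second default task
for _failed_round in range(3):             # pretend three rounds came up short
    price, limit = escalate(price, limit)

print(price, limit)  # price and time limit after three escalations
```

In practice a real scheduler would also enforce a budget cap so escalation cannot run away; that detail is omitted here for brevity.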

Quality Assurance

Any cloud labor platform, regardless of its makeup or the details of its process, will live and die by work quality. Who wants to pay for substandard results? Quality assurance is an absolute necessity, and MTurk has next to none built in. Ziptask once again turns to its supervision team, who personally make sure that every document is up to standards before presenting it to the client. The client provides the final pass/fail check, and no money changes hands until everyone agrees that the work makes the cut. AutoMan, by comparison, automates the process in the simplest possible way: it has multiple workers complete the task and waits to see which result is the most common. The workers are paid once a statistically viable agreement has been reached, with no payment going to workers who provided incorrect answers.
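A toy version of that agreement-based check can be sketched in a few lines of Python. The thresholds below are made up for illustration; AutoMan's actual quality control is a statistical confidence calculation, not the fixed vote counts used here.

```python
# Illustrative toy of agreement-based quality control: keep collecting
# answers until one response has enough votes and a clear lead over the
# runner-up, then accept it. Thresholds are invented for this sketch;
# they are not AutoMan's real statistical test.
from collections import Counter

def accept_answer(answers: list[str], min_votes: int = 3, min_margin: int = 2):
    """Return the winning answer once it is decisive, else None
    (meaning: post more copies of the task and collect more answers)."""
    counts = Counter(answers).most_common(2)
    top_answer, top_votes = counts[0]
    runner_up_votes = counts[1][1] if len(counts) > 1 else 0
    if top_votes >= min_votes and top_votes - runner_up_votes >= min_margin:
        return top_answer
    return None

print(accept_answer(["cat", "cat", "dog"]))          # not decisive yet -> None
print(accept_answer(["cat", "cat", "cat", "dog"]))   # decisive -> "cat"
```

Note that in a scheme like this, only the workers whose answers match the accepted result would be paid.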

Will My New Boss Be A Robot?

Rest assured, it’s probably not going to happen anytime soon. The relative inflexibility of both the AutoMan algorithm and the MTurk interface means that this combination will be very effective, but only for certain kinds of tasks. In a nutshell, this isn’t going to add any muscle to MTurk: it will continue to be bad at intricate or skill-based work, and good at work just above “a monkey could do it” level. The difference is that the AutoMan algorithm could greatly increase Turk’s effectiveness at those kinds of tasks. For all other office work, especially anything you can’t wait around for five or six workers to agree on, Ziptask is going to get you better results, faster, and most likely at a better price.


Comments

  • Guest Bob Dec 17, 2012 02:54 pm GMT

    Great piece!

  • Guest Emery Berger Dec 20, 2012 10:40 pm GMT

    Thanks for the kind words about our AutoMan project! I would like to make a few small clarifications / corrections. With respect to its quality assurance algorithm: agreement by a majority is neither necessary nor sufficient for AutoMan's quality control algorithm. Also, it does not necessarily take many workers to achieve a high level of confidence in a result. For example, where there are many possible answers, agreement by two workers is enough to surpass AutoMan's default 95% confidence level. With respect to payment: AutoMan lets you cap the total budget for any computation overall, or for any particular task. Also, since AutoMan spawns all tasks in parallel, there is often little waiting time. One experiment we report on in the paper showed that we were able to get license plates recognized for about $0.12 each, in 2 minutes on average.

    Full details are in the paper that appeared at OOPSLA this October (http://dl.acm.org/citation.cfm?doid=2384616.2384663), also linked to from the http://automan-lang.org website.

    Emery Berger
    Associate Professor of Computer Science
    University of Massachusetts Amherst
