PBS Newshour, February 2015
By Jenny Marder
Synopsis by Research Rockstar intern Sarah Sites
Many researchers struggle to find willing survey takers. What if there were a website where they could post their surveys, offering a small incentive in return for successful completion? Great idea, right? Considering the increasing popularity of Amazon’s Mechanical Turk, many researchers apparently think so. On this site, surveys are advertised as small jobs, and a pool of micro-workers, some of them dedicated, full-time survey completers, fills out thousands of surveys per week. It seems like a win-win situation. But what about data quality?
Survey research is part art, part science, and producing well-crafted questions is only the beginning. Getting qualified, willing survey takers is essential and often the most painful part. Surprisingly, according to research conducted at the University of Texas at Austin, “Turker” data is just as reliable as data gleaned through traditional survey methods, such as online and university surveys. Psychology professors at Princeton agree with their Texas counterparts that Turkers are also considerably more representative of the general American population than students are. And that’s a good thing, too. According to one researcher, the median Turker participates in twenty academic studies per week, as opposed to the one per week completed by student subjects.
The biggest problem with Mechanical Turk, however, involves surveys that depend on gut instinct. As NewsHour’s Jenny Marder states, “[Turkers are] seeing the same questions repeated again and again…It’s common for researchers to test intuition…if somebody’s answered a question a hundred times or even three times, they’re no longer getting the intuitive response. They’re getting a much more trained response.” For research in intuition-heavy disciplines such as social psychology, this can be detrimental to quality results. Mechanical Turk may therefore be reliable for certain types of studies but should be avoided for others. There is also the question of whether Turkers pay adequate attention to the surveys they take. Guidelines recommend that researchers incorporate questions to measure attention, but as Turkers can attest, those questions tend to be the same from survey to survey, so experienced Turkers learn to recognize them.
In the end, there is no perfect way to conduct survey research. Like all systems, the Mechanical Turk option has advantages and disadvantages, but for better or for worse, the website seems to be thriving and growing in popularity.
If researchers are clued in to the limitations of the site, the integrity of survey research can be maintained. Perhaps it can be used for pretests? For simulated data needs? Or perhaps it is worth running experiments comparing it against panel-based research projects? The idea of using Turkers may make market researchers squirm, but it is an option.
Are you concerned about the quality of your surveys?
Consider taking our 10 Point Checklist for Questionnaire Design course.
Visit our website to learn more: http://bit.ly/10PtCklist