I saw some great interest this morning in the idea of a survey grading site.
Inspired by yet another awful questionnaire design (one that had been sent to the market research community itself, ironically), I threw the idea out half-jokingly.
I was thrilled to see responses to the idea from great tweeps like @MDMktingSource, @conversition, and @MargaretRoller.
Could this crazy idea have legs?
One idea: perhaps a volunteer committee of six experienced researchers could get together once a month or so (virtually, of course) to review and grade questionnaires? We could come up with an agreed-upon system for the grading process.
And if we are daring enough, post a best and worst list?
As Annie pointed out, “Perhaps a survey rating system would encourage the quality #MRX companies to say no to bad surveys?”
And I have to confess, a “wall of shame” does appeal to me.
What do you think? Is this worthwhile? Would it encourage good survey design? Could we get a sponsor for it? Are there legal issues? Please add your comments, ideas, and sponsor nominations below.
Thanks!
18 comments
This is a good idea but doomed to failure I think. Why? Because the majority of these surveys are quick and dirty. By the time they went before some committee or panel the company/agency/student/postgrad will have taken their results for better or worse (usually worse) and will have presented them to someone as bona fide findings, ‘scientifically’ robust and so on.
But couldn’t it raise awareness? I can imagine the best/worst list being picked up in the press. Maybe I am too optimistic, but I think that if we can raise awareness (and highlight good and bad examples), it could have an impact.
Plus, I have seen some bad surveys from “professional” researchers that would make you cry.
I love the idea but do see some issues.
Researchers generally don’t qualify to answer MR surveys; we would have to… lie… to receive them. And I know none of us are doing this.
I would want there to be a six-month trial period during which each company receives the results privately. That way, if they wish to improve their surveys, they can pull their socks up before being embarrassed publicly. I’m still uncomfortable about public criticism.
I do recognize that clients may provide poor-quality surveys to research companies, so you could say that some bad surveys are the clients’ fault. But it makes more sense to me that bad surveys are the result of MR companies not doing their job of fixing, improving, or saying no to bad research.
If an organization like CASRO led the initiative, I would feel a lot better.
Part 2
If we take the medical industry as an example (and medicine claims to be one of the stars of science), many of the approaches and surveys carried out in that area are dogged by bad or misguided methodology. I don’t think we will convince anybody outside the industry to either pay a competent MR company or at least make sure that whoever does the work has some understanding of survey methodology.
Raising awareness of poor surveys is a step forward. But at the same time it will raise awareness that survey software is available online, and then it’s “hey, why should we pay an agency when we can do the darn thing ourselves for nothing?”
Maybe I’m getting too old and cynical.
My first reaction is…yes. But I think for this to really have an impact it has to be conducted at a fairly high level — in terms of the types/sponsors of online questionnaires graded, the research principles by which the instruments are graded, etc. Otherwise it won’t provide the serious discussion/awareness we need on design issues.
I definitely love the idea – examples of good and bad questions, and when to use what, would be beneficial. But would we put our own surveys out there, too? What about confidentiality and ownership? What I would love is an unofficial forum (maybe a LinkedIn subgroup) where we can post certain PIA surveys and kvetch to each other about them – like a group survey-therapy forum.
Great stuff – thanks, folks.
I like the idea of CASRO or another organization sponsoring it. And, grudgingly, no public lynchings might be a good idea 😉
And I agree that if we do this, quality is key. It would have to be a proper system.
Rob: As for raising awareness of online surveys themselves… that’s happening anyway. The horse is out of the barn, don’t you think?
Yes, the horse is out of the barn, and it’s no good trying to close the doors because we simply can’t. What we will see is a proliferation as these things leak out into other social media – smartphones, surveys via txt/SMS messaging, not to mention all the qualitative stuff being carried out by unqualified ‘facilitators’. I would argue it’s the role of the MRS here in the UK, and the AMRS (if that’s the parallel), to do more to ‘police’ surveys that are in the public domain. But they ain’t got no teeth if you ain’t a member.
As has already been mentioned, I get dumped out of most surveys as soon as I click that I’m a researcher. But I do ‘lie’ and look through the survey out of professional interest; I simply never complete them.
To be honest, I haven’t got a solution. What we have to do collectively is make sure that our ‘product’ offers better value for money, more insight, better and more robust methodologies, better design and so on (hmm, did I say value for money?). When you start listing the stuff we offer, companies ought to realize that market research isn’t a cost, it’s an investment. (I’d highlight that if I could on my iPhone.)
Sorry if this is unduly negative, but I think this is pie-in-the-sky stuff. I don’t see how it could be in an agency’s interest to subscribe – even doing so is an acknowledgement that maybe they are writing bad questionnaires and perhaps they’d better get someone external to check objectively whether they know what they’re doing or not. You could do it on a ‘mystery respondent’ basis, I suppose, but I don’t think the agencies would take too kindly to that sort of underhand tactic.
Agencies and clients need to take responsibility for this stuff themselves. If they don’t, then it will cost them both commercially in the long run. The bottom line is the rating system that matters.
AJ, I agree – agencies would not subscribe; it would disrupt their process too much.
Perhaps a proposed set of evaluation metrics? A grading system that could be used by those who are seeking improvement?
Legally, if someone sends me a survey, unsolicited, can I assume it is public domain?
I like Michelle’s idea a lot.
My other question would be where you draw the lines. For example, how long is too long for a survey? I think we’d all agree the shorter the better, but I don’t think there is, or can be, an agreed ideal standard. Also, whilst there are some widely accepted truths about the basics of questionnaire design, there are also elements that come down to personal style or company philosophy around how and when questions should be asked. Unless people are signing up voluntarily and agreeing to the standards set, I don’t think anyone would pay the ratings any heed. If a client commissions a project and they believe (rightly or wrongly) that it’s served their objectives, will they care if an external panel of self-appointed experts rates the questionnaire 1/10?
Research craft is important and almost certainly in decline, but I don’t think this is the answer.
I agree, AJ – some items can’t be captured in a “rule.” But I think some can.
Or another option: a list of questions. Sort of “10 questions to ask yourself before you send out a survey invitation.” I like the 26 questions ESOMAR posted about buying online sample. Perhaps CASRO or the MRA should have something similar, but for questionnaires.
Some great ideas, and probably different ways it could be developed.
1) A service to clients: an independent check on the questionnaires their suppliers provide
2) A service to researchers to give them a sense of how their questionnaires compare
3) A service for client DIYers who want to check that they have not gone wrong
I think it would probably be great to start with some really basic checks, such as duplicated breaks, missing breaks, spelling mistakes, surveys that take more than X, screens that need scroll bars to see all of a grid.
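To make that concrete, here is a rough sketch of what a couple of those basic checks might look like in code; the survey structure, field names, and thresholds below are just illustrative placeholders, not any real tool or agreed standard.

```python
# Rough sketch of automated "basic checks" for a questionnaire.
# The survey structure, field names, and thresholds are placeholders.

MAX_MINUTES = 20        # stand-in for "surveys that take more than X"
MAX_GRID_COLUMNS = 8    # assumed width beyond which a grid needs a scroll bar

def basic_checks(survey):
    """Flag simple mechanical problems in a questionnaire definition."""
    issues = []
    for number, question in enumerate(survey.get("questions", []), start=1):
        options = question.get("options", [])
        # Duplicated answer options, e.g. the same age break listed twice.
        if len(options) != len(set(options)):
            issues.append(f"Q{number}: duplicated answer option")
        # Grids so wide that respondents must scroll to see every column.
        if question.get("grid_columns", 0) > MAX_GRID_COLUMNS:
            issues.append(f"Q{number}: grid wider than {MAX_GRID_COLUMNS} columns")
    if survey.get("estimated_minutes", 0) > MAX_MINUTES:
        issues.append(f"Estimated length exceeds {MAX_MINUTES} minutes")
    return issues

# Example:
# basic_checks({"estimated_minutes": 35,
#               "questions": [{"options": ["18-24", "25-34", "25-34"],
#                              "grid_columns": 12}]})
# -> ["Q1: duplicated answer option",
#     "Q1: grid wider than 8 columns",
#     "Estimated length exceeds 20 minutes"]
```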
Issues such as the use of Web 2.0 techniques are more problematic, with contradictory evidence accumulating – for example, Bernie Malinoff’s finding that they can generate worse data.
We would probably have to restrict ourselves to how surveys appear; I don’t think there is much scope, in the near future, for checking an adaptive DCM created by somebody else and expecting to give an informed opinion in real time.
Cultural issues will also need to be taken into account – for example, those pesky words and phrases that differ between the USA, the UK, and Australia, not to mention countries like Switzerland, India, and Canada that require more than one language.
But I think we should keep the idea moving.
In terms of trade bodies, I think the only ones who could easily do it without a conflict of interest are the buyers’ organisations. For example, if you were paying membership fees to CASRO, would you want them ruling that your survey was rubbish but that a non-member’s was good (and if it were the other way round, would you believe it)?
Ray
Thanks, Ray! I think your options 1 and 3 are the ones most likely to succeed.
Just 1 question:
Can you give me an example of a trade org that is a “buyers’ organisation”? In the US, anyway, most seem to have more supplier members than client-side members.
Another thought: it would be great if an academic such as Mick Couper or Jon Krosnick would head up something like this; it might even raise funds for their faculty.
Ray
To quote this blog, “market research is successful when it leads to decisions being made and actions being taken.” How would you propose to grade surveys without that key criterion?
A wise person* once advised “measure what you should, not just what you can.” Would it really be meaningful to grade surveys without knowing the business objectives, the details of the decision being supported, the strategy and context that prompted the research initiative? I’m not suggesting there are not bad surveys, but a well constructed survey can be a part of a project that completely misses the mark. On that point, is the survey the most important part of the research process? The idea of grading surveys would seem to emphasize the survey at the expense of the rest of the process (e.g. choice of methodology, sample design, analysis).
In my opinion, it’s unlikely market researchers will “earn a seat at the table” where critical decisions are being made by demonstrating their tactical myopia to one another.
* The wise person is Pat LaPoint (http://marketingnpv.com); no affiliation other than reading his great book.
Hi Geoff, a great point, especially “a well constructed survey can be a part of a project that completely misses the mark”. I agree completely!
Still, I can’t help but think that trying to address some basics is a teensy step in the right direction. I have been toying with how to test the concept…
K.