I was talking with a long-time business friend the other day about a qualitative market research project he has underway, and how it is imploding on him. He was frustrated because the project is behind schedule, and he feels his market research agency is not giving him enough options for getting things back on track.
We chatted about the symptoms and possible solutions, and I thought I’d share them here as a brief case study. The context: 50 in-depth interviews (IDIs) are being conducted among IT professionals about their experiences using a specific type of software. The problem: a screener was used to get the best participants, but some of the participants seem disturbingly unqualified. And how can you trust the research results if you think the participants are giving weak input?
Possible Solutions?
A. Review the screening criteria: maybe the screener needs to be updated. For example, did you screen for purchase authority but not hands-on management? In many tech categories, including software, the person who approves the product or brand selection is not always the person who installs or uses it; those can be very different roles. If you qualify someone on purchase involvement alone, they may not be able to talk knowledgeably about the user interface, features, reliability, and so on.
B. Is this a result? If so, can the data still be useful? If you think you nailed the screener but people still don’t seem qualified, is that a research finding? Are people who use your product actually fairly ignorant about it? Maybe people are using your product differently than you intended? Maybe they aren’t even aware of some of its features? They may be legitimate customers, but you may be getting a flash of reality about how your product is actually used.
C. Was the sample source legit? If not, find alternative sources. Hopefully this is not the case. But if the research agency drew the original list of names it screened from an iffy source…you could have people who misrepresented themselves in order to get the honorarium. If you suspect this is the case, add a couple of sneaky knowledge questions to the screener (in this case, I suggested asking about their familiarity with other related product brands).
Then throw in a couple of red herrings.
To make this example a little more precise, this would be like asking a consumer, “Which brands of HDTV have you evaluated in the past 3 months? Sony, Samsung, Star Screens, or Panasonic?” Anyone who selects “Star Screens” is out. OK, that was too obvious, but you get the point. If you try an approach like this and get a lot of charlatans, you should seek out alternate sample sources.
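If your screener is programmed rather than administered by hand, a red-herring check like this is easy to automate. Here is a minimal sketch in Python using the HDTV example above; the brand lists, respondent data, and function name are illustrative assumptions on my part, not part of any particular survey platform.

```python
# Minimal sketch of an automated red-herring check (illustrative only).
# "Star Screens" is the fictitious brand from the example above.

REAL_BRANDS = {"Sony", "Samsung", "Panasonic"}
RED_HERRINGS = {"Star Screens"}  # selecting a fake brand disqualifies

def passes_red_herring_check(selected_brands):
    """Return False if the respondent claims familiarity with a fake brand."""
    return not (set(selected_brands) & RED_HERRINGS)

# Two hypothetical respondents and the brands they claim to have evaluated
respondents = {
    "R1": ["Sony", "Samsung"],
    "R2": ["Samsung", "Star Screens"],  # claims to know the fake brand
}

for rid, brands in respondents.items():
    status = "qualified" if passes_red_herring_check(brands) else "disqualified"
    print(f"{rid}: {status}")
```

In practice you would make the fake brand plausible-sounding and track the disqualification rate: a spike in red-herring hits is your signal that the sample source itself is the problem.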
I hope that mini-case study was useful. Any questions or comments? Please post them here or leave a message on our Blog Comments line: 508 691 6004 ext 703.