During the June 23 AMA MRC TweetOff session with me, Jeffrey Henning (@JHenning), and Cathy Harrison (@VirtualMR), one topic we debated was the role of anonymity in customer satisfaction surveys.
Cathy’s point: “Customer satisfaction surveys are for measuring, not intervening.”
And Jeffrey’s: “Follow up with every dissatisfied customer who takes a survey.”
To be frank, my opinion on this topic has changed in just the past year or two. Before then, I was an ardent believer that all research must be anonymous—no matter what. I felt that any direct follow-up would show research participants that their survey responses could result in unexpected communications—and even if “helpful,” this experience could still impact their future willingness to participate in research.
But in the past couple of years, two things have happened:
- First, I have been working with many clients who need to show that market research is not an academic exercise, and who need to demonstrate that research can have direct, immediate, positive outcomes. Many client-side market researchers have to negotiate for budget with non-researchers, who often view such studies as nice, but not necessarily actionable. Imposing anonymity on customer feedback reduces the research’s potential for clear, measurable usefulness.
- Second, I have seen raw data from several studies where it was obvious that participants expected follow-up. Indeed, anyone who has run a customer satisfaction survey knows that open-ended questions will often return entries such as, “The last software upgrade didn’t work—can you please fix it?” or “I have called your customer service number twice and can’t reach a live human being!” You can bet that if they take the time to type that into a survey and you don’t follow up, the damage will be irreparable.
Anonymity in Market Research
Yes, most surveys should be anonymous. But customer satisfaction surveys are an exception. Make it clear at the beginning or end of the survey that respondents can opt out of (or, if you prefer, opt in to) follow-up. Provide a phone number, website, or email address that respondents can use for any questions about how their responses will be used. The reality is that most customers expect follow-up.
What do you think? Do you agree? Have a different perspective? Please add your comment here or call the blog comments line at 508.691.6004 ext 702.
Want to learn more about customer satisfaction research? Check out the Research Rockstar class here: ClassList.
3 comments
I just had an interesting phone call from someone about this blog post. His concern is that people will get greedy: once they are allowed to follow up on these surveys and it goes well, they will want to do more of this. It’s a risk, and setting the right expectations about this can be tricky!
I’m with Jeffrey on this one. When a customer airs a complaint, grievance or other issue to a research agency (representing the research sponsor), they expect someone to deal with it, not for it to vanish into a black hole. While anonymity is important and we must always ensure we have a customer’s permission to pass their comments back to the client, we should ensure that our research is as actionable as possible. Calling research a ‘customer satisfaction programme’ which seeks only to measure and not to understand or improve satisfaction is a bit of a misnomer.
There are other reasons customer satisfaction research is often not as actionable as it could be, mostly linked to a lack of depth and understanding in the data collected, as well as a failure to use multiple data sources in the analysis stages (but that’s for another post!). But at least resolving a customer’s issue following a survey is a step in the right direction. And ultimately this should benefit all parties: the research becomes actionable, so it is better for the client and agency, and the customer gets their complaint resolved. And since we all know happy customers are more likely to be loyal customers, perhaps the question should be: why aren’t we doing this as standard?
Thanks for this post!
In a recent CS survey for a mid-size German IT provider, I added an option for respondents to deliberately abstain from anonymity, so that their survey data could be matched with personalised sales data from the CRM system. The opt-in was placed at the end of the questionnaire and prominently communicated in the invitation and reminder emails.
While the overall response to this option was rather moderate (n=33 out of n=290 respondents), analysis showed that both the length of verbatims and the variance in matrix questions were higher for those abstaining from anonymity. The immediate actionability for the key account team is obvious. Moreover, my untested hypothesis is that transparently communicating this option in the invitations contributed to overall trust in the survey.
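To give a flavour of the kind of match-and-compare step described above, here is a minimal sketch in Python/pandas. All column names, IDs, and figures are invented purely for illustration; the actual CRM fields and survey structure from the study are not described in this post.

```python
# Hypothetical sketch: join opted-in survey respondents to CRM records,
# then compare verbatim length and rating variance between the opted-in
# and anonymous groups. Data and column names are made up for illustration.
import pandas as pd

# Invented survey export: one row per respondent.
survey = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104, 105],
    "opted_in":      [True, False, True, False, False],
    "verbatim":      ["Upgrade broke sync, please fix", "", "Great support team", "ok", ""],
    "rating_q1":     [2, 4, 5, 3, 4],
    "rating_q2":     [1, 4, 5, 3, 3],
})

# Invented CRM extract, joined only for respondents who opted in.
crm = pd.DataFrame({
    "respondent_id":  [101, 103],
    "account_owner":  ["Key Account Team A", "Key Account Team B"],
    "annual_revenue": [120_000, 45_000],
})

# Match opted-in respondents to their CRM records for follow-up.
matched = survey[survey["opted_in"]].merge(crm, on="respondent_id", how="left")

# Compare verbatim length and rating variance between the two groups.
survey["verbatim_len"] = survey["verbatim"].str.len()
summary = survey.groupby("opted_in").agg(
    mean_verbatim_len=("verbatim_len", "mean"),
    rating_q1_var=("rating_q1", "var"),
    rating_q2_var=("rating_q2", "var"),
)

print(summary)
print(matched[["respondent_id", "account_owner", "rating_q1"]])
```

Whether the opted-in group really writes longer verbatims would of course need a proper test on real data; the sketch only shows how the two data sources could be linked and summarised once the opt-in consent is in place.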