Perhaps my favorite thing about reading blogs is that I can have a dialog with the author and fellow readers. Friendly debates or spontaneous collaborations are a lot of fun.
But when comments I share that are “pending moderator review” never appear, it really annoys me.
About 10 days ago, I read an interesting article on TMCnet—a site that I like for technology-related topics. But it just so happened that this article had some important omissions. So I posted a thoughtful reply. Nothing incendiary. Nothing rude. Just a friendly sharing of information with the author and fellow readers.
It never appeared.
After a week, I emailed the editor. Still nothing.
The original article recommends NPS (Net Promoter Score) as the optimal standard for customer satisfaction with telecommunications providers. Ummm, no. So since I didn’t get to share on the TMCnet site, let me share some information here for those of you interested in measuring customer satisfaction in the telecommunications space.
- “There are many scenarios in which customers may be satisfied with certain service levels or offerings yet refrain from recommending or referring the larger offering to their friends.” Yes, this is very true.
- “…customer referrals – should be the ultimate measure of customer satisfaction and should be cultivated to the greatest extent possible.” Not necessarily.
In telecommunications, willingness to refer is not always the best metric. Having done over a hundred research studies on telecomm topics over the past 20+ years, I know that other items can be more relevant. For example, two items that are very important in the telecomm space:
- Willingness to renew (vs. propensity to brand switch). For some service providers, lack of brand loyalty is a huge challenge. And cost of customer acquisition can be quite high. So for them, the most useful metric can be renewal intent.
- Interest in “add-ons” (incremental features/services that would increase $/customer). Again, because the cost of customer acquisition can be high in telecomm, some service providers focus not only on retention but on extensions: how can we sell more to the existing customer base? That’s why in telecomm you often hear people talk about raising ARPU (average revenue per user). And customers’ willingness to buy more says a lot (like how well the proposed add-ons align with their interests, and how far the brand has permission to extend).
Yes, NPS is a wonderfully efficient approach to measuring customer loyalty. But it isn’t the only one. Customer satisfaction and loyalty research is not a one-size-fits-all proposition. Telecomm providers need to take the time to identify the best metrics for their research to be truly useful.
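For readers who haven’t worked with NPS directly, the tally itself is simple arithmetic: on the standard 0–10 “likelihood to recommend” scale, it is the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch, with made-up sample ratings:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    using the standard 0-10 likelihood-to-recommend scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses from ten subscribers.
sample = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(sample))  # 4 promoters - 3 detractors out of 10 -> 10.0
```

The same two-line calculation could just as easily be pointed at a “willingness to renew” item instead; the scoring mechanics are not what makes a metric the right one.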
[As always, please add a comment or question here, or call the Blog Requests line (508.691.6004). Thanks!]
10 comments
There are a myriad of issues with NPS, and you have certainly covered a few. I would also add that when the goal of much research in telecomm is to retain customers, the meaning of NPS can be very difficult to communicate well to line employees, who are the very individuals with retention as one of their daily responsibilities.
In minimizing churn, better approaches include observing actual behavior and analyzing those trends over time, both on a database level (e.g., bill size for the past several months among those who churned versus those who stayed) and through primary research (e.g., number of times the subscriber comparison-shopped in the past three months). Such measures are easier to communicate to line employees, can get at the essence of reasons for churn, and can direct management on ways to minimize future churn rates.
I agree with your premise. For more discussion regarding some of the problems with NPS check my blog.
Even The Ultimate Question, the book that brought NPS to a wide audience, says that it doesn’t work in all industries. Just today at the Clarabridge User Conference one hospitality executive whose hotel chain caters to meeting planners said that NPS of guests was not a predictive measure. Another presenter embraced NPS for business software while indicating that actually asking people how many times they had recommended the brand in the past year was a better predictor than the standard willingness to recommend question. Intuit still swears by NPS, despite their own research showing that other measures have greater predictive validity. NPS is a wonderfully elegant model that organizations should test to see if it applies to them, but it very well might be much less useful than other measures. Test, then standardize, I say.
I am not a proponent of NPS being the be-all and end-all of customer satisfaction, exactly as the post above suggests. The largest opposition to NPS comes from the fact that it does not give a direction for action. But neither do some of the suggestions outlined above. The suggestion of “willingness to renew” is probably a finer gradient of the NPS methodology that can differentiate between the passives and the detractors. But what next?
So while RFM metrics (recency, frequency, monetary value) like bill size comparison or interest in add-ons are very good at the operational level, NPS has its value in providing a general sense of direction, as stated above.
I would say, while NPS is a good compass, it cannot be the replacement of a good map.
Rajeev Gambhir
Thanks everyone. I appreciate the thoughtful comments. Usually what I do is test things like “willingness to renew,” “willingness to recommend,” etc. as dependent variables — never as the only variables. You still need, in most cases, to gather additional independent variables that can be tested to see how well they predict the dependent ones. For clients making a long-term commitment to customer satisfaction and loyalty research, I always recommend they start with some research to test hypotheses about what drives the desirable outcome — and which outcomes are most meaningful. For example, in telecomm, a client may determine that the most meaningful metric is renewal intent. They may find that 2 primary items are strong predictors (hypothetically, “availability of preferred plan type” and “frequency of experiencing dropped calls”). Once the client has a good handle on the dependent and independent variables that work best, ongoing monitoring can throttle back to a shorter instrument, with occasional sanity checks (such as in cases where an important market development has occurred, or where other factors may have changed the “formula”).
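The driver-testing step described above can be sketched as a toy example: correlate each candidate independent item against the chosen dependent variable (here, renewal intent) to see which ones predict it. All variable names and ratings below are invented for illustration, and a real study would use proper regression rather than raw correlations:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-5 survey ratings from ten respondents (invented data).
renewal_intent = [5, 4, 5, 2, 1, 3, 4, 2, 5, 3]  # dependent variable
plan_fit       = [5, 4, 4, 2, 2, 3, 5, 1, 5, 3]  # "availability of preferred plan type"
dropped_calls  = [1, 2, 1, 4, 5, 3, 2, 4, 1, 3]  # "frequency of dropped calls" (higher = worse)

for name, item in [("plan fit", plan_fit), ("dropped calls", dropped_calls)]:
    print(f"{name}: r = {pearson(renewal_intent, item):+.2f}")
```

In this fabricated data, plan fit correlates positively with renewal intent and dropped calls negatively, which is the kind of pattern that would justify keeping those two items on a shortened ongoing instrument.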
hello all:
My understanding of Net Promoter Score (subtraction of 3s and below from top two boxes of 4 and 5 – right?) is that it is just a clever way of repackaging a traditional method of finding highs and lows. Maybe new terms are helpful in recognizing what we already know, but succinctly put into a phrase that denotes a concept – like tipping point instead of critical mass? I don’t understand all the fuss. What am I missing? Be it for technology, earth-moving equipment or conditioning shampoo. Thanks, kaf
Thanks Kathryn for your article. I have been working for a Telco operator for a long time, and we used NPS as one indicator of customer satisfaction, but the final indicator was built based on many other criteria. There are a lot of variables involved on customer satisfaction with an operator (customer service, products, promotions, services, billing . . .). Also we have to take into account that interaction with carrier occurs on multichannel platforms, so it is really a complex interaction process and many business units are involved with it.
I think Rajeev hit the nail on the head – direction for action. Whichever scoring method is used, an understanding of key drivers behind the score is critical to enable appropriate action to be taken. Some companies use correlation analysis to get to these key drivers but I would contend that the method is flawed as it starts with a hypothesis on what the key drivers might be. Comment analysis is, for me, a much better approach to uncovering key drivers. Also, I would say that given an understanding of the key drivers, every opportunity should be taken to close the loop with customers (either directly on a 1:1 basis or indirectly 1:many via the website).
Good post Kathryn.
Regards, Neil
Hi Kathryn
I reached this blog via the New MR group on LinkedIn. One of our researchers has a view similar to yours in a blog piece she just posted.
Key points that may resonate with you:
“It’s clear from our data that even if the link between NPS and future profitability could be proved to work for some industries (and I’m by no means convinced it could be), it is not a strong performance indicator – at least in terms of profitability – when economies are shrinking.”
“I completed a survey recently which I received from my mobile phone provider. It consisted of two questions – one was how likely I’d be to recommend them (the ‘NPS’ question) and one asked me why I’d given that score. But it didn’t ask me how likely I would be to stay with them when my contract is up for renewal, or how I would rate them versus the competition. Or how I rate their value for money. Of course, I don’t know what the objective of the research was, but I can’t help thinking that while knowing what proportion of your customers would recommend you might be interesting, how does that help your bottom line?”
Cheers
Jon @ Virtual Surveys