VoC Worst Practices? When the Metric is King

Author Bio

“I gave you a seven. What? No, I will not make it a nine!”

Earlier this week, a colleague in the Confirmit London office received a phone call from a supplier regarding a survey she’d recently completed about their service. She receives surveys from them regularly, and apparently the follow-up calls are a common occurrence, which is good. However, this call wasn’t to discuss a comment she’d provided or to seek further insight into her survey responses. It was to ask her to change her score.

Yes. The Account Manager felt the seven she’d provided should be a nine. And he wanted her to re-take the survey to change her response. Now, this colleague a) is not a pushover and b) has hung around with Voice of the Customer experts long enough to know that this is ridiculous for a company trying to get genuine insight into the customer experience. And she told them so in no uncertain terms.

It seems this has happened in the past with the same supplier. Either they ask her to score a nine before the survey is sent, or they ask her to change it afterwards. Now, this is the UK, where scoring a nine out of ten is akin to offering to bear someone’s children, but that’s not the point. Clearly the Account Manager in question is accountable for, and probably compensated on, the scores his customers provide, and he has taken entirely the wrong approach to achieving the rating he needs. In the Account Manager’s defense, this is more likely the fault of a company that is focused on chasing the score rather than on the reasons for the score.

There are two things to consider here. Firstly, we have the issue of the metric being the beginning and end of the VoC process. Ask a question, get a score, add it to the aggregate score. Job done. Except, of course, that is nothing like what businesses should be trying to achieve by asking customers for feedback. The aim should be driving action that helps to deliver better customer experiences. Metrics are a perfectly valid way to gauge where we are as a business, and a guide to the results of our actions, but they are not the goal of the exercise. A score doesn’t in itself MEAN anything, and to focus solely on raising or maintaining a metric is to entirely miss the point. The score is the starting point. It is where a VoC program should initiate actions based on customer feedback and, as a result, start to deliver benefits to the customer. Then the score will go up of its own accord, and for the right reason.

Secondly, we have the issue of compensating staff on VoC results without their fully understanding what they’re expected to do to achieve those results. Confirmit’s team usually recommend incentivizing on the action you want to encourage, not directly on the score. For example, if the objective is to use feedback to make it easier to purchase something online, then incentivize on repeat purchase or frequency of purchase, not just on the score.

Fundamentally, though, employees need to know what a Voice of the Customer program is trying to achieve and what their role is in helping to deliver it. If the Account Manager had phoned my colleague to ask why she had given a score of seven and to find out how they could improve next time, that would have been great. It would have been a lovely example of the closed-loop process we talk about, in which businesses listen to customers, understand their experience, take action to improve, and then inform the customer about what was done. However, this chap seemed to feel that simply getting the score to a level that suited him was a shortcut to the same result.

At our recent event in London, “Driving Transformational Change”, our VoC expert Claire Sporton explained the importance of taking action rather than focusing obsessively on the metric. “The needle won’t move itself,” she said. No, it won’t. Action will.

But the action should not be to ask the customer to fake their answers. That way lies madness, not customer experience success.