Most VoC programs use only quantitative measures. This is not particularly surprising. Researchers and analysts take a great many statistics classes and, as such, specialize in the manipulation of numbers. But quantitative voice of the customer data alone is not enough for successful customer experience management.
It is rare to find a customer experience professional who is conversant in non-quantitative VoC methods. Moreover, qualitative methods are messier, more time-consuming, and harder to coax "proof" from, since most people understand "proof" as a number. Yet the basic fact is unassailable: qualitative methods have the power to teach you what you do not already know.
Note that successful researchers do more than report on data; they are proactive with their data collection and start out with research hypotheses. That is, they look to prove what they expect to find. This is indeed the way scientific research is conducted all over the world. But what if the research hypothesis is wrong, or let's just say not entirely right? How do you proceed from there?
Well, if qualitative measures were included in the data capture, even a small number, then there is hope of discovering the next direction to pursue. Moreover, customers make errors in recording their answers, or they sometimes feel there is no place to list their actual concerns. Providing qualitative data capture can correct for these problems as well.
So you need both kinds of data. Quantitative AND qualitative together give you a holistic view of the customer experience; neither is enough in isolation. Each has its place: you need quantitative collection for the score of the experience, and qualitative for the why behind the score.
Quantitative data usually comes from scaled items and can easily be analyzed with means, charts, and statistical manipulation; the focus is on the numbers. In isolation, this type of analysis can be misleading. Qualitative data provides flexibility: customers can explain the scores they gave and why they gave them. It allows the direct voice of the customer to come through a process that can otherwise be inflexible.
VoC Data: Clean or Dirty?
The ills of dirty data have been well documented in computer and technology magazines for many years. Dirty data, and the need to scrub it, is all the more widely discussed now given the trend toward data-driven decision-making (D3M), often supported by data mining, in education (Mercurius, 2005) as well as in business and many other venues.
Dirty data can simply be defined as errors in the data, however they may have accumulated. We most often hear about them in reference to database input errors, and we tend to see solutions to the dirty data problem that rely on quality control at the front end of processes (such as during data input) or on the purchase of special software designed to detect and repair certain types of errors.
However, VoC data errors are not as easily controlled when only quantitative VoC data capture is conducted with your customer population. Therefore, back-end data scrubbing is necessary, and it is highly effective at eliminating many VoC data errors.
Remember that one of the aims of a voice of the customer program is to hold employees accountable for their respective (or owned) performance. Certain outcomes are critically important to both leaders and front-line employees, such as incentives, bonuses, praise, and recognition, and those outcomes flow from the results of our measurement programs whether the data are dirty or clean. Note that employees will scrutinize their performance appraisals for fault if their VoC scores are lower than expected. Since the outcomes are important, it behooves us to do everything feasible to ensure the cleanliness of the data.
One of the most important, and most often overlooked, tasks in a voice of the customer program designed to hold employees accountable for their performance is a back-end quality control process. I call this Survey Calibration.
Some examples will help to clarify what can be corrected and the impact those corrections can have.
“The agent that helped me with my problem was very helpful however, the first one that answered my phone call in the very beginning put me on hold for over 5 minutes twice and never gave me any kind of an answer. Then she got aggravated when I asked to be transferred.”
Without Survey Calibration, simply presenting the results from your VoC program would assign this survey to the last agent to handle the call. But the scores on this survey were based on the performance of the first agent, and they were very negative. Based on the customer's explanation, this survey should be removed from the second agent and attached to its rightful owner. Anything less than this due diligence creates "noise" in your VoC program and prevents employees from focusing on the customer experience.
“The comments on this survey were not for this last representative, but for the initial rep I dealt with. I was attempting to get her to cancel some orders and when I told her to transfer me, she hung-up on me.”
Once again, without a Survey Calibration process, this survey would be assigned to the last representative (as the last one to handle the call, perhaps the one spoken to when the customer called back). The scores from the survey would have been attached to the wrong representative.
“Please make the first 3 questions 8’s. I made a mistake on grading and didn’t catch it until it was too late. I want to make sure Cheryl gets credit for a good job.”
In this survey we see a very common customer mistake. Mis-coding is a common dirty data issue, but Survey Calibration is designed to catch it and ensure the customer's true voice is heard.
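The correction patterns illustrated above, a survey credited to the wrong agent or a mis-keyed score, can be caught with a simple review queue: flag any survey whose verbatim comment contains phrases that suggest misattribution or a scoring mistake, then route it to a human calibrator. A minimal sketch, where the record fields, phrase list, and agent IDs are all hypothetical:

```python
# Flag surveys whose comments suggest misattribution or a mis-keyed
# score so a human calibrator can review and correct them.
# Field names ("agent", "comment", "scores") are illustrative only.

FLAG_PHRASES = [
    "first agent", "first rep", "initial rep", "transferred",
    "not for this", "wrong", "made a mistake",
]

def needs_calibration(survey: dict) -> bool:
    """Return True if the comment contains a phrase worth reviewing."""
    comment = survey.get("comment", "").lower()
    return any(phrase in comment for phrase in FLAG_PHRASES)

def review_queue(surveys: list) -> list:
    """Subset of surveys that should be routed to a human calibrator."""
    return [s for s in surveys if needs_calibration(s)]

surveys = [
    {"agent": "A102", "comment": "Great service, thank you!",
     "scores": [9, 9, 9]},
    {"agent": "A417", "comment": "These comments were for the initial rep, "
                                 "not this last representative.",
     "scores": [2, 3, 2]},
    {"agent": "A233", "comment": "Please make the first 3 questions 8's. "
                                 "I made a mistake on grading.",
     "scores": [1, 8, 8]},
]

flagged = review_queue(surveys)
print([s["agent"] for s in flagged])  # → ['A417', 'A233']
```

Note that the phrase list only surfaces candidates; the actual reassignment or score correction still requires a person reading the comment, which is exactly the point of the calibration step.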
Survey Calibration is essential if your results are to be believable and defensible. I have made it my mission to review every customer comment for my clients. If I did not, people would be held accountable for poor performance, or receive the praise due someone else. To me, that is unjust and unfair.
Also, when these errors go uncorrected, they can anger employees across your company. When people can track down even one instance of inappropriately credited negative feedback, it provides ammunition for arguments. The success or failure of your VoC program is contingent on quality; it should never create dissatisfaction for customers OR employees.
Because analyzing qualitative data is not "easy," there can be a tendency to skip the back-end data cleaning and call the first pass "close enough." But is it? To what extent does the Survey Calibration process affect scores? On average, between 4% and 5% of the surveys we collect need to be corrected.
Maybe this doesn't sound like much. But when you consider how the results are used and what dirty data may do to your processes and people, it is 4% or 5% too much.
To show the actual impact of dirty data on scores, I tracked the Survey Calibration process for one contact center with 110 agents and analyzed its 600 surveys. The results are shown in the chart. Imagine if you had used the dirty data for decision-making or to hold people accountable. Better yet, which numbers would you want to be held accountable for?
In this instance, customers were much more satisfied with their experience than initially suspected, both with the company and with the agent. First contact resolution was higher than originally reported as well. Think about the impact this has on your VoC program ROI; surely you are considering the ROI of VoC. Regardless, what is clear is that with dirty data your ROI is lower and the credibility of the results can be called into question. Just as importantly, your job is to manage to the actual voice of the customer, not to errors.
This article sourced from: Survey Pain Relief: Transforming Customer Insights into Action
This article is dedicated to CXDay and is part of the CXDay Blog Carnival.