Do you have reasons that cause you to throw out surveys?




“Do you have a list of reasons that cause you to throw out surveys?” is a question included in the 25 Mistakes to Avoid with Post-call IVR Surveys eBook and self-assessment. There are many barriers to success for a customer experience measurement program, and it helps to avoid the common pitfalls we have seen over the years. The eBook and self-assessment include diagnostic questions to uncover issues I have come across since inventing post-call IVR surveying in contact centers almost 20 years ago.

Why is this a problem?

When you ask a customer to evaluate your service, you are telling them “we value your opinion.” If you genuinely value their opinion, why would you throw it out just because it wasn’t what you wanted to hear? Doesn’t that mean you really don’t value their opinion at all?

Take, for example, a utility company that asks its customers to evaluate its service yet throws out any evaluations collected during an outage. Why wouldn’t this company want to know about performance during challenging times? It’s easy to work with customers who are happy, but handling customers who are upset because their power is out is definitely more challenging. Don’t you want to know how your representatives deal with these customers? If the survey is designed properly, the agent will not be held responsible for the outage itself; by implementing a “rule” to exclude that data, you are incorrectly assuming that agents cannot do well during an outage. How can you defend your people if you don’t measure continuously? What guidelines will you use to improve your processes during outages if you don’t ask customers how they were served?

Another common exclusion “rule” applies when an agent transfers a customer to the survey, but the customer gives low scores based on the representative they spoke with before being transferred to that agent. Why delete this survey? Instead, identify the mismatch during the Survey Calibration process and use the customer’s comments to reassign the survey to the representative the customer stated they were evaluating. Wouldn’t that be fair for all parties involved? If your Voice of the Customer program doesn’t include a Survey Calibration process, you need a better customer experience measurement process.

I once heard of a contact center that threw out the highest and lowest scores when calculating their agent customer satisfaction scorecards. What supporting data did they use to decide that the very satisfied and very dissatisfied customers’ opinions didn’t matter? What message do you think that sends to contact center agents? That they should only care about ‘some’ customers? Are you telling them that you only value the opinions of your ambivalent customers? Or why would you delete feedback that focuses on your product or service on the grounds that the agent is not responsible for representing the company? Certainly the agent is responsible for some aspect of how the customer feels about the product (as defined by the Survey Calibration process).
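
To see how much this practice can distort an agent’s results, here is a minimal sketch in Python using made-up scores on a 1-to-5 scale (not taken from any real scorecard), comparing a full average against one that drops the highest and lowest responses.

```python
# Hypothetical survey scores for one agent on a 1-5 scale (illustrative only).
scores = [5, 5, 4, 3, 1]

# Full average: every customer's opinion counts.
full_average = sum(scores) / len(scores)

# "Trimmed" average: the highest and lowest responses are discarded,
# silencing both the delighted customer and the very dissatisfied one.
trimmed = sorted(scores)[1:-1]
trimmed_average = sum(trimmed) / len(trimmed)

print(f"Full average:    {full_average:.2f}")    # 3.60
print(f"Trimmed average: {trimmed_average:.2f}")  # 4.00
```

In this illustration, discarding the extremes raises the agent’s score from 3.60 to 4.00 and erases the one clearly dissatisfied customer from the record entirely, which is exactly the feedback a coaching conversation would need.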


The Solution

The purpose of conducting a voice of the customer program should be to create a differentiated customer experience through service. If your operations include system outages and other challenges, then you need to know whether and how your level of customer service during those times differs from less challenging times. Utilizing a Survey Calibration process will not only prevent you from having to throw out responses entirely when they don’t correspond to the representative they are assigned to; it will also instill confidence among agents because they will know that their evaluations are earned. If you can’t tell your employees with 100% confidence that their customer satisfaction scores are not another agent’s, how do you expect them to trust the process?

A well-designed voice of the customer program eliminates the need to remove the highest and lowest scores when creating contact center agent scorecards, along with the wrong message that practice sends. Since Survey Calibration has already determined, without a doubt, that the scores belong to that particular agent, there is no need to exclude anyone’s evaluation of the service they received just because they stated they were delighted or completely dissatisfied with it. At the end of the day, you either value the customers’ opinions, or you don’t. There is no middle ground.

About Vicki Nihart

Vicki is Six Sigma certified and has more than 15 years of experience in contact center operations. She leverages contact center operations subject matter expertise and deep analytic capabilities to support clients and build relationships. Analyzing and interpreting data is a passion she converts into purpose.
