Why you should not use survey findings to make operational and strategic decisions



“Are you afraid to use your post-call IVR survey findings to make operational and strategic decisions?” is one of the questions posed in the 25 Mistakes to Avoid with Post-call IVR Surveys eBook and self-assessment. The eBook and self-assessment include diagnostic questions to uncover many of the problems Customer Relationship Metrics has come across since inventing post-call IVR surveying in contact centers almost 20 years ago.

Why is this a problem?

The purpose of asking customers about their experience and collecting mountains of data every year is to help you improve and grow your business, right? Leveraging the feedback that customers provide can be the best way to identify and quickly remedy process breakdowns, as well as capture process improvement ideas. Who better to ask where the pains lie than the end user?

One of the biggest mistakes organizations make is not asking the right questions. Often when companies implement a post-call survey, they get so focused on keeping it short that they don’t take the time to ask themselves what they genuinely need to learn. For example, if the survey is so short that it only asks ‘was the agent courteous?’ and ‘would you recommend this service to a friend?’, what are you really going to learn? More importantly, how will you leverage the responses to improve and/or grow the business? What happens when the percentage of responses to one of those questions begins to shift heavily toward ‘no’? Where will you turn to determine why customers are responding that way? How many man-hours will you waste guessing at the cause of the issue? In this example, you definitely do not want to use the findings to make operational and strategic decisions.
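
To make the point concrete, here is a minimal, hypothetical sketch (the data, field names, and two-question survey are invented for illustration, not taken from any real program): with only two yes/no answers on file, you can see the shift toward ‘no’ happening, but nothing in the data tells you why.

```python
# Illustrative only: a survey limited to two yes/no questions can show
# *that* sentiment is shifting, but carries nothing that explains *why*.
from collections import Counter

# Each response holds only the two answers the short survey collects.
responses = [
    {"month": "Jan", "courteous": "yes", "recommend": "yes"},
    {"month": "Feb", "courteous": "yes", "recommend": "no"},
    {"month": "Mar", "courteous": "no",  "recommend": "no"},
    {"month": "Mar", "courteous": "no",  "recommend": "no"},
]

no_counts, totals = Counter(), Counter()
for r in responses:
    totals[r["month"]] += 1
    if r["recommend"] == "no":
        no_counts[r["month"]] += 1

for month in ("Jan", "Feb", "Mar"):
    pct_no = 100 * no_counts[month] / totals[month]
    print(f"{month}: {pct_no:.0f}% answered 'no' to recommend")

# The downward trend is visible, but there is no field here (reason code,
# topic, verbatim comment) that tells you what to fix -- that gap is the point.
```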

Other companies miss the mark in a different way. They take the time to develop sound questions; however, they implement the survey in a way that the data cannot be defended. For example, a customer calls in and speaks with Suzie, who is rather rough and not very helpful, and is then transferred to Johnny, who resolves her issue. When the customer completes a survey, gives poor ratings, and states in the comments that Johnny was great but the scores reflect her opinion of Suzie, how will you handle that survey? Since Johnny transferred the customer to the survey, the low scores will be credited to him. Will you move the survey results from Johnny to Suzie? Will you throw it out? What if Suzie’s average scores are high and Johnny’s are low because Suzie transfers most of her calls to Johnny to handle because she can’t? How will you go about addressing this? When Suzie disputes any disciplinary action, claiming the customers don’t have any issues with her because her high survey scores say so, how are you going to counter that? How many hours will you waste trying to defend your position? In this example, too, you definitely do not want to use the findings to make operational and strategic decisions.
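
Here is a small, hypothetical sketch of that attribution problem (the agents, scores, and crediting rule are invented for illustration): when the score is credited to whichever agent transfers the caller into the survey, the agent who actually caused the poor experience can come out with the higher average.

```python
# Illustrative only: credit the score to the "last agent" (the one who
# transfers the caller into the survey) vs. the agent who mishandled the call.
calls = [
    # first_agent mishandled the call; last_agent sent the caller to the survey
    {"first_agent": "Suzie",  "last_agent": "Johnny", "score": 2},
    {"first_agent": "Suzie",  "last_agent": "Johnny", "score": 1},
    {"first_agent": "Johnny", "last_agent": "Johnny", "score": 5},
    {"first_agent": "Suzie",  "last_agent": "Suzie",  "score": 4},
]

def average_by(calls, key):
    sums, counts = {}, {}
    for c in calls:
        agent = c[key]
        sums[agent] = sums.get(agent, 0) + c["score"]
        counts[agent] = counts.get(agent, 0) + 1
    return {agent: round(sums[agent] / counts[agent], 2) for agent in sums}

print("Credited to the transferring agent:", average_by(calls, "last_agent"))
print("Credited to the agent who caused the issue:", average_by(calls, "first_agent"))
# Under last-agent crediting, Johnny's average looks worse than Suzie's,
# even though Suzie handed him the unhappy callers.
```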

The Solution

Back in 2006, we introduced the term “survey malpractice,” which is definitely something you want to avoid. Do not give your company faulty information based on inadequate research methods. Malpractice is a tough word because it implies professional incompetence through negligence, ignorance, or intent. Unfortunately, many contact center leaders are not skilled in the science and art of measuring the customer experience. By definition, an ineffective measurement program generates errors from negligence, ignorance, and/or intentional wrongdoing. We know that contact center leaders do not intentionally do wrong, and that they avoid negligence.

So that means the solution is to gain knowledge and overcome ignorance about customer experience measurement. First, know that for customer insights to be actionable, the data needs to be complete and clean. Take the time to ask yourself not only what you need to learn from the customer, but also how you will use that information.

Next, know that to get clean data you must conduct Survey Calibration. This will ensure the integrity of the data and will not only allow you to make operational and strategic decisions, but also put you in a position to defend them.

Of course, there is significantly more to cover, such as use of scale, analytics, bias, and much more. That is why we invested so much in education by creating the eBook mentioned above and several more contact center resources in our library.

About Vicki Nihart

Six Sigma certified, with more than 15 years of experience in contact center operations, Vicki leverages contact center operations subject matter expertise and deep analytic capabilities to support clients and build relationships. Analyzing and interpreting data is a passion she converts into purpose.
