“Is your post-call IVR survey missing conditional skip patterns?” is one of the questions included in the 25 Mistakes to Avoid with Post-call IVR Surveys eBook and self-assessment. This compilation of what works, grounded in scientific evidence, aims to eliminate the dysfunctional practices that are prevalent in the industry and help you get outstanding results.
Since inventing post-call IVR surveying for contact centers 20 years ago, I have been on a crusade to end survey malpractice and make life better for those who are committed to service excellence. Join me in this crusade by commenting on articles and sharing them with your fellow customer experience and contact center colleagues.
Skip patterns in your contact center surveys are a must
Communication is sometimes a lost art, something I am reminded of a few times a week when I ask my daughter whether she had a good day at school. Technically, it's a yes-or-no question, but depending on the answer, I need to ask follow-up questions to better understand why the day was good or why it was not. The same type of communication needs to happen between your contact center agents and your customers. The art of asking diagnostic questions yields a more positive and productive service experience.
Since we know this to be true, would it make sense to field a post-call IVR survey that enables the same type of diagnostic questioning?
Think about a survey that presents the same inflexible group of questions to every caller. “Was your issue resolved on the call today, yes or no?” is easy enough to ascertain, but you need the ability to segment responders (those calling with a problem versus a question, and those whose issue was resolved versus not) and then ask diagnostic questions based on each response.
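The segmentation described above is essentially a small decision tree: each answer determines which question plays next. Here is a minimal sketch in Python; the question identifiers, wording, and branching rules are all hypothetical, not from any particular IVR platform.

```python
def run_survey(answers):
    """Walk a branching survey; `answers` maps question id -> the caller's response.

    Returns the ordered list of question ids the caller actually hears,
    so you can see which questions were skipped.
    """
    asked = []

    # First, segment the caller: problem vs. question.
    asked.append("reason")  # "Did you call about a problem or a question?"

    if answers["reason"] == "problem":
        asked.append("resolved")  # "Was your problem resolved on the call today?"
        if answers["resolved"] == "no":
            # Diagnostic follow-up, played ONLY to callers with an unresolved problem.
            asked.append("why_unresolved")
    else:
        # Callers with a simple question skip the resolution branch entirely.
        asked.append("answer_clear")  # "Was your question answered clearly?"

    return asked


# A caller with an unresolved problem hears the diagnostic follow-up...
print(run_survey({"reason": "problem", "resolved": "no"}))
# ['reason', 'resolved', 'why_unresolved']

# ...while a caller with a question never hears the resolution questions at all.
print(run_survey({"reason": "question"}))
# ['reason', 'answer_clear']
```

The point of the sketch is that every caller answers only the questions that apply to their situation, which is exactly what a linear, one-size-fits-all survey cannot do.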
Most surveys ask the same questions for every product or service line. How can you understand service experience differences by product type if you must ask every customer the same questions? Your contact center agents cannot serve your customers without asking diagnostic questions, so why accept a survey that cannot skip questions that don't apply and branch to those that do?
Your contact center agents could not collect valuable insight with a linear path of questioning, and your survey cannot collect valuable insight under that constraint either. For a survey to collect valuable insight, its data collection framework has to be built on the most effective research principles available. If the system you use (or plan to use) is inflexible, your results will be difficult to interpret, unable to generate the insights you need, and possibly even inaccurate. Do you really think forcing every customer down the same path is a small measurement error? You already know that approach doesn't work for your service delivery, so there is no way it can work for your survey.