“Do you agree that automating the process for callers to participate in post-call IVR surveys will prevent agents from cheating?” is a question included in the 25 Mistakes to Avoid with Post-call IVR Surveys eBook and self-assessment. The eBook and self-assessment include diagnostic questions to help you uncover problems in your program that I have come across since inventing post-call IVR surveying in contact centers 20 years ago. Many of these misunderstandings have hardened into accepted “truths,” and this question touches on a topic that is widely misunderstood.
Why is this a problem?
Automated is not the same as fool-proof. Dictionary.com defines automate as “to apply the principles of automation to a mechanical process, industry, office, etc.” Nowhere in that definition does it say that doing so will create a perfect, fool-proof process. In this case, automated means that customers who answer “yes” when the IVR asks at the start of the call whether they would like to participate in a survey are transferred to complete the survey once the agent disconnects. Sounds fool-proof and simple enough, right? Not so much.
Remember, the system is designed to transfer the customer after the agent disconnects from the call. That means the agent has to hang up before the customer for the system to work. Think about the call that just ended: the customer is not pleased that they can’t get what they want. They have just spent 10 minutes asking for something five different ways, only to find out that their request simply isn’t feasible. The agent can tell just by the way the call has gone that the customer is unlikely to give them a top-box score on anything. It is reasonable to suspect that this agent may not allow someone to complete a survey that they feel pretty confident will only add to a pile of lackluster scores. So the agent decides to stay on the line until the customer hangs up. What? You didn’t think the agents would figure this out?
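The timing dependency described above is the whole loophole, and it can be captured in a few lines. The sketch below is hypothetical (the class and function names are mine, not any vendor’s API); it simply models the transfer rule as stated: the customer reaches the survey only if they opted in *and* the agent disconnected first.

```python
from dataclasses import dataclass

@dataclass
class Call:
    opted_in: bool              # customer said "yes" to the survey prompt at call start
    agent_hung_up_first: bool   # agent disconnected before the customer did

def route_to_survey(call: Call) -> bool:
    """Hypothetical sketch of the automated transfer rule: the customer is
    bridged to the post-call survey only when they opted in AND the agent
    disconnected before the customer did."""
    return call.opted_in and call.agent_hung_up_first

# The exploit: an agent who simply waits for the customer to hang up
# defeats the transfer, even though the customer opted in.
assert route_to_survey(Call(opted_in=True, agent_hung_up_first=True))        # survey runs
assert not route_to_survey(Call(opted_in=True, agent_hung_up_first=False))   # survey silently skipped
```

Nothing in the rule distinguishes an agent who stayed on the line deliberately from one who did so out of courtesy, which is exactly why both the intentional and the accidental cheat look identical in the data.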
Let’s also not forget about the accidental cheat. This agent doesn’t intentionally prevent customers from completing the survey; they are simply providing stellar customer service.
Agents could unintentionally bias the survey data simply by doing what has been ingrained in them as a fundamental of customer service: never hang up on a customer. Any agent who has been in the customer service industry for any length of time has heard this more times than they can remember. How are they now supposed to violate that fundamental rule and hang up on a customer? Since the agent has no idea whether the customer agreed to participate in the post-call survey, they would essentially have to hang up on every customer. Yikes! Some agents will be able to adapt; many, however, will struggle, and at inconsistent rates.
THINK ABOUT THE CUSTOMER
If the customer is hung up on, they may call back and ask to be connected to the survey so their feedback can be submitted. But now the survey is attached to a different agent (assuming that second call is even selected for the survey).
What if the customer selects “no,” I do not want to participate, and then changes their mind? This one can go either way. The customer may have become irritated during the call and decided afterward that they wanted to participate after all. Now they must call back to participate, and the survey gets attached to the wrong agent (again, if that second call is selected for the survey). What if they were wowed and changed their mind? If they call back, the survey still gets attached to the wrong agent.
What if the customer forgets? Relying on customer memory is a risky proposition. If the call requires interactive dialogue between the customer and the agent (most do), the customer can easily forget to stay on the line to participate. And if they call back, once again the survey is assigned to the wrong agent.
You may say, “Don’t assign it to any agent.” Case studies show that would be a very bad decision; review the eBook Improving First Contact Resolution with Agent Accountabilities.
The solution here is simple: remember that there is no fool-proof method that will prevent agents from cheating or creating bias, and think about what customers do as well. Every method has strengths and weaknesses; success comes from acknowledging them and building them into your design plan. Regardless of whether we use an automated, semi-automated, or manual methodology, we employ a stringent, consistent calibration process on every survey collected to ensure that each survey is attached to the correct agent and that the feedback is rightfully owned by that agent, and you need to do this as well. We design customer experience Voice of the Customer measurement programs to be the best possible research study given the technical makeup of a contact center.