What are the best practices for quality monitoring forms, and why do they matter? Contact centers are under continuous scrutiny to justify the millions spent industry-wide on quality assurance programs. Contact center leaders fight for more resources while senior management asks, “What value are we getting from what we are already spending?” In the case study below, you may find that the reason for the constant scrutiny is valid, and learn how to stop it.
This example comes directly from a study Customer Relationship Metrics conducted in an inbound sales group and its sister service group, with the goal of aligning their internal Quality Monitoring (iQM) program more closely with their customers’ evaluation of the service experience, captured through an external Quality Monitoring (eQM) program. Why? Because despite the high scores reported by the iQM process, customer complaints, first call resolution performance, and customer defection were all trending negatively. In response, they implemented a new strategy that we call Impact Quality Assurance (iQA).
Internal quality monitoring is a very labor-intensive (expensive) process in which supervisors or other designated call monitors use software to assess agent-handled calls, grading them against defined criteria on quality monitoring forms. The criteria may cover numerous items, and the hope is that good scores on the graded calls reflect a good customer experience. If you’re like most, hope “is” the right word here.
However, the customer cares little, if at all, whether the contact center agent tried to cross-sell, whether the required legalese was provided, or whether the correct policy or procedure was followed. The customer is much more likely to care whether the agent “understood” or “had knowledge” and, in the case of cross-selling, whether the effort was a relationship enhancer rather than a push.
This study yielded several findings of interest, two of special note:
1. Avoid overuse of “Yes” and “No” criteria: We discovered, for example, that most items on their internal quality monitoring forms allowed only “yes” and “no” responses. For some items this was appropriate, but for many it was not. Binary responses seriously reduced the variability of the data and produced a “poorer” dataset. That poorer dataset prevented us from identifying where customer expectations could be exceeded, which created mass mediocrity and a large gap between the internal score and the customers’ score. Changing the criteria (where appropriate) to scaled assessments on the quality monitoring form produced a “richer” dataset that offered far more opportunity to identify where, and how, agents (and the company) could exceed customer expectations. (Note that this will make all of your previous data non-comparable to the data accumulated going forward. This is unavoidable, but the inconvenience is temporary; over time, comparable data will be available for period analysis.)
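The point about binary criteria flattening the data can be illustrated with a small sketch. The scores below are hypothetical, not from the study; the idea is simply that the same ten calls, graded "yes/no" versus on a 1–5 scale, show very different spreads:

```python
from statistics import variance

# Hypothetical grades for the same ten calls, scored two ways.
binary_scores = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # "yes"/"no" recorded as 1/0
scaled_scores = [5, 4, 5, 3, 4, 2, 5, 4, 1, 3]  # same calls on a 1-5 scale

# The binary version makes nine of ten calls look identical;
# the scaled version exposes real differences between them.
print(variance(binary_scores))  # low spread
print(variance(scaled_scores))  # much richer spread
```

The scaled dataset's higher variance is what gives analysts something to correlate against customer survey results; the binary version hides most of that signal.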
2. Give the more important items more weight: We also found that the questions on their internal quality monitoring forms that did have a scale were all weighted the same. This is a problem because customers view some elements as more important than others. Don’t you? The weights were derived from analytics on post-call survey data collected through the external Quality Monitoring (eQM) program. We then multiplied those weights by the ratings accumulated across all the questions on the quality monitoring form to arrive at a more accurate score.
Together, these changes:
1. Allowed us to better understand why a gap exists between the company’s and the customers’ expectations and perceptions.
2. Allowed us to manage the gap between the customer experience and the company’s expectations by clearly identifying which actions cause the gap to widen and which specific actions are needed to close it.
3. Allowed us to better control the gap through accurate resource deployment and investments (training, coaching, and systems).
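The weighting step described in point 2 can be sketched as a weighted average. The criteria names, weights, and ratings below are illustrative assumptions, not figures from the study; in practice the weights would come from the eQM survey analytics:

```python
# Hypothetical customer-derived weights (from eQM analytics), summing to 1.0.
weights = {"understood_need": 0.40, "knowledge": 0.35, "cross_sell_fit": 0.25}

# Hypothetical agent ratings for one call, on the 1-5 scaled criteria.
ratings = {"understood_need": 4, "knowledge": 5, "cross_sell_fit": 3}

# Multiply each rating by its customer-derived weight and sum,
# so the criteria customers care about most drive the final score.
weighted_score = sum(weights[c] * ratings[c] for c in weights)
print(round(weighted_score, 2))  # 0.40*4 + 0.35*5 + 0.25*3 = 4.1
```

Compared with an unweighted average (4.0 for these ratings), the weighted score moves toward the criteria customers say matter most, which is what narrows the gap between the internal score and the customers’ score.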
So, the next time you find yourself under scrutiny because of your quality assurance expenses, just remember: your internal quality monitoring forms could be blocking you from getting more value and performance from your quality assurance investments.