Internal Quality Monitoring
In late May, QATC (the Quality Assurance & Training Connection) published the results of its quarterly survey on critical quality assurance and training topics in call centers, focusing on quality monitoring call calibration practices. Having worked for a third-party call monitoring company for 8½ years, I found the survey results quite interesting (sometimes scary), but for very different reasons than those highlighted in the QATC report.
1) Quality Monitoring Calibration requirements – According to the survey, 24% of respondents indicated that calibration participants were not required to review calls prior to the call calibration meeting. In these cases, the meeting is a feel-good, group-think exercise, not a true call calibration session. Yikes! Unless the Quality Assurance team in the call center grades every call by committee, such an exercise cannot gauge the degree of disparity that exists within the current call monitoring process. And since disparity is not being measured, the effectiveness of call calibration cannot be quantified. Result: a waste of time.
An ebook titled Eliminating the Worst Call Center Practice: Quality Monitoring Calibration is an extraordinary and unprecedented look into one of the most utilized processes in a call center. This ebook exposes a level of ignorance in the call center industry so widespread it will amaze you. When you read it, you will see why light bulbs go off in the heads of so many readers as they connect their struggles with quality monitoring call calibration to the flaws in their calibration processes. This fact-based case study report is full of real-world insights into quality assurance and call monitoring calibration. Here is a question-and-answer review of what’s inside.
Call centers are under continuous scrutiny to validate the millions of dollars the industry spends on internal quality monitoring (IQM) programs. Call center leaders fight for more resources while senior management asks, “What value are we getting from what we are already spending?” The case study below may reveal the reason for the constant scrutiny and how to stop it.
This example comes directly from a study Customer Relationship Metrics conducted in an inbound sales group and its sister service group, with the goal of aligning the IQM program more closely with the customer experience. Why? Because despite the high scores reported by their IQM process, customer complaints, first-call resolution performance, and customer defection were all trending in the wrong direction. We call this study Quality Monitoring Alignment (QMA).
IQM is a very expensive process that generally consists of supervisors or other designated call monitors (using expensive software) assessing Agent-handled calls and rating them against defined criteria. The criteria may cover numerous items, and the hope is that good scores on the graded calls reflect a good customer experience. If you’re like most, hope “is” the right word here.
However, the customer cares little, if at all, whether the call center Agent tried to cross-sell, whether the required legalese was provided, or whether the correct policy or procedure was followed. The customer is much more likely to care whether the Agent “understood” or “had knowledge” and, in the case of cross-selling, whether the effort was a relationship enhancer rather than a push.
This study yielded several findings of interest, two of special note:
1. We discovered, for example, that most items on the IQM form only allowed “yes” and “no” responses. For some items this was appropriate, but for many it was not. Binary responses seriously reduce the variability of the data and produced a “poorer” dataset. That poorer dataset prevented us from identifying where customer expectations could be exceeded, which created mass mediocrity and a large gap between the internal score and the customers’ score. Changing those criteria to a scaled assessment produced a “richer” dataset that revealed where customer expectations could be exceeded, and how. (Note that this change makes all previous data non-comparable with data accumulated going forward. This is unavoidable, but the inconvenience is temporary; over time, comparable data will again be available for period analysis.)
2. Additionally, we found that all questions on the IQM form that did have a scale were weighted equally. This is a problem because customers view some elements as more important than others. Don’t you? The weights were derived from post-call survey data collected and analyzed through an External Quality Monitoring (EQM) program. We then multiplied those weights by the ratings accumulated across all the questions to arrive at a more accurate score.
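To make the arithmetic behind these two findings concrete, here is a minimal sketch of a weighted, scaled score. The criterion names, weights, and ratings are purely illustrative assumptions, not figures from the QMA study; in practice the weights would come from your own EQM survey analytics.

```python
# Hypothetical criterion weights, assumed to be derived from post-call (EQM)
# survey analytics. These names and numbers are illustrative only.
weights = {"understood_need": 0.40, "knowledge": 0.35, "cross_sell_fit": 0.25}

# An Agent's scaled ratings (1-5) for one internally monitored call,
# replacing the old binary yes/no checkboxes.
ratings = {"understood_need": 5, "knowledge": 4, "cross_sell_fit": 2}

def weighted_score(ratings, weights, scale_max=5):
    """Weighted IQM score as a percentage of the maximum possible score."""
    earned = sum(weights[c] * ratings[c] for c in weights)
    possible = sum(weights[c] * scale_max for c in weights)
    return 100 * earned / possible

print(round(weighted_score(ratings, weights), 1))  # → 78.0
```

With equal weights the middling cross-sell rating would be diluted; weighting by what customers actually value surfaces it.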
These best practices yielded an IQM form that:
1. Allows us to better understand why a gap exists between the company’s and the customers’ expectations and perceptions.
2. Allows for managing the gap between the customer experience and company expectations by clearly identifying which actions cause the gap to widen and which specific actions are needed to close it.
3. Allows us to control the gap through accurate resource deployment and investment (training, coaching, and systems).
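The gap management described above can be sketched as a simple per-criterion comparison of internal (IQM) and customer (EQM) scores. All figures below are made-up examples for demonstration, not data from the study.

```python
# Illustrative per-criterion scores (0-100): internal monitors vs. customers.
# The criterion names and values are hypothetical.
iqm = {"understood_need": 92.0, "knowledge": 88.0, "cross_sell_fit": 95.0}
eqm = {"understood_need": 74.0, "knowledge": 85.0, "cross_sell_fit": 61.0}

# Gap = how much rosier the internal view is than the customer's view.
gaps = {c: iqm[c] - eqm[c] for c in iqm}

# Largest gap first: these criteria are the candidates for training,
# coaching, and systems investment.
priority = sorted(gaps, key=gaps.get, reverse=True)
print(priority)  # → ['cross_sell_fit', 'understood_need', 'knowledge']
```

A ranking like this is what lets you deploy resources against the widest gaps first instead of spreading investment evenly.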
So, the next time you find yourself under scrutiny over your internal quality monitoring investment, just remember: “yes” and “no” can make or break it.
Through real-world best practices, part 3 – the final chapter in this three-part series – highlights a few “how to” steps for overcoming barriers and becoming less of a Pain In The Ass (PITA) to your customers. It begins with four vital questions…
Step 1: Answer some questions.
According to W. Edwards Deming, the father of the quality movement, “workforces are only responsible for 15% of …”