In late May, QATC (the Quality Assurance & Training Connection) published the results of its quarterly survey on critical quality assurance and training topics in call centers, focusing on quality monitoring call calibration practices. I found the survey results quite interesting (sometimes scary), but for very different reasons than those highlighted in the QATC report.
1) Quality Monitoring Calibration requirements – According to the survey, 24% of respondents indicated that calibration participants were not required to review calls prior to the call calibration meeting. In these cases, the meeting is a feel-good, group-think exercise, not a true call calibration session. Yikes! Assuming the Quality Assurance team in the call center does not grade every call by committee, such an exercise is ineffective at gauging the degree of disparity in the current call monitoring process. And since disparity is not being measured, the effectiveness of call calibrations cannot be quantified. Result: a waste of time.
2) Measurement of Quality Assurance calibration effectiveness – As a “numbers person,” I was alarmed … and surprised that nearly 40% of survey respondents did not quantify the impact of their internal quality monitoring calibration process. That aside, another 53% of respondents used standard deviation to measure effectiveness, which is well and good as long as the quality team doesn’t care about QUALITY (only consistency). Standard deviation reflects the average dispersion around a central point, the mean. The smaller the standard deviation, the more consistent the measurement. But using this approach (and only this approach) to quantify the impact of call calibration assumes that the mean is correct! I made this mistake (too) many years ago, when I grew increasingly pleased as the call monitoring calibration standard deviation for one of my clients consistently decreased over a period of months. That is, until I joined one of those call calibration sessions. The quality assurance team had indeed become more consistent … and WRONG, growing ever more lenient in grading call center agent calls. A truly comprehensive call calibration process should measure BOTH consistency and accuracy of grading.
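To make the failure mode concrete, here is a minimal sketch (with hypothetical scores, not data from the survey) showing how a QA team can have a tiny standard deviation — looking very consistent — while every grader is far from the correct score:

```python
from statistics import mean, stdev

# Hypothetical example: the correct grade for a call is 70,
# but every QA grader leniently scores it around 85.
standard_score = 70
qa_scores = [84, 85, 86, 85, 84, 86]

print(f"std dev: {stdev(qa_scores):.2f}")  # small -> graders look very consistent
print(f"mean:    {mean(qa_scores):.2f}")   # far above 70 -> graders are inaccurate
print(f"bias:    {mean(qa_scores) - standard_score:+.2f}")
```

Tracking only the first number, the team appears to improve month over month; the third number is the leniency that standard deviation alone can never detect.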
For more information on how to measure both consistency and accuracy, get your complimentary copy of the ebook titled Eliminating the Worst Call Center Practice: Quality Monitoring Calibration; the case study in it will shock you.
3) Number of recorded calls graded – According to the QATC survey, only 1% of survey respondents graded more than 5 calls in their quality monitoring calibration sessions. Another Yikes!
In order to measure not only consistency but accuracy as well, the call calibration process recommended by Customer Relationship Metrics utilizes a person known as ‘The Standard’ to generate the correct call scoring. The scoring of all members of the Call Center’s Quality Assurance staff is then compared to ‘The Standard’ using a statistic known as Pearson’s Correlation. (For more information, get the Call Calibration ebook). The p-value (the probability of observing a result at least as extreme as the test result when there is no effect in the population) of Pearson’s coefficient is largely contingent upon sample size. Small sample sizes (such as the sample sizes of fewer than 5 calls used by a majority of survey participants) provide less precision in the decision of whether an observed difference between ‘The Standard’ and the QA staff is significant or not.
I could go on and on; in fact, I did in the ebook. Here is the fact: standard deviation is just fine if “you” want to be average (or don’t really care about getting it right). Keep in mind that your “customers” demand that you deliver higher quality than average.
If you want to learn more about the QATC go to: www.QATC.org
Read an Interview with the author of the ebook titled Eliminating the Worst Call Center Practice: Quality Monitoring Calibration
Learn how to get more samples and more insights at a lower cost with your Quality Assurance program.