Internal Quality Monitoring
Do you want to deliver better customer experiences in your contact centers? Of course you do; no one in their right mind would say otherwise. To deliver better customer experiences, contact center quality must transform at a more dramatic rate. Too many practices are still built on the exact same premises they were in the 1980s. I have compiled five of the best articles on contact center quality excellence this year to help you move forward on your own quality excellence journey. Remember, your quality program can either enable or disable your ability to improve the customer experience.
The truth is, the customer experience has a large impact on the value of your contact center, and the quality items you measure, manage, and report on greatly influence the result. Having a problem with empathy? Quality Assurance is driving that. Have a problem with FCR? Yep, QA. Have a problem with agent morale? Hey, it’s QA. We all know that “what gets measured, gets managed.” Well, there is a downside to that as well: you get what you get.
In late May, the QATC (Quality Assurance & Training Connection) published the results of its quarterly survey on critical quality assurance and training topics in call centers, focusing on quality monitoring call calibration practices. I found the survey results quite interesting (and sometimes scary), but for very different reasons than those highlighted in the QATC report.
1) Quality monitoring calibration requirements – According to the survey, 24% of respondents indicated that calibration participants were not required to review calls prior to the call calibration meeting. In these cases, the meeting is a feel-good, group-think exercise and not a true call calibration session. Yikes! Unless the Quality Assurance team in the call center grades every call by committee, such an exercise is ineffective at gauging the degree of disparity within the current call monitoring process. And since disparity is not being measured, the effectiveness of call calibrations cannot be quantified. Result: a waste of time.
An ebook titled Eliminating the Worst Call Center Practice: Quality Monitoring Calibration is an extraordinary and unprecedented look into one of the most widely used processes in a call center. This ebook exposes a level of ignorance in the call center industry so widespread it will amaze you. When you read it, you will see why the light bulbs go on for so many people as they connect their struggles with quality monitoring call calibration to the flaws in their call calibration processes. This fact-based case study report is full of real-world insights into quality assurance and call monitoring calibration. Here is a question-and-answer review of what’s inside.
What are the best practices for quality monitoring forms, and why do they matter? Contact centers are under continuous scrutiny to justify the millions (industry-wide) spent on quality assurance programs. Contact center leaders fight for more resources while senior management asks, “What value are we getting from what we are already spending?” In the case study below, you may find that the constant scrutiny is valid, and learn how to stop it.
This example comes directly from a study Customer Relationship Metrics conducted in an inbound sales and (sister) service group whose goal was to more closely align its internal Quality Monitoring (iQM) program with its customers’ evaluation of the service experience through an external Quality Monitoring (eQM) program. Why? Because despite the high scores reported from the iQM process, customer complaints, first call resolution performance, and customer defection were all trending in the wrong direction. In essence, they implemented a new strategy that we call Impact Quality Assurance (iQA).
Internal Quality Monitoring is a labor-intensive (expensive) process in which supervisors or other designated call monitors use software to assess agent-handled calls, grading them against defined criteria on quality monitoring forms. The criteria may cover numerous items, and the hope is that good scores on the graded calls reflect a good customer experience. If you’re like most, hope “is” the right word here.
However, the customer cares little, if at all, whether the contact center agent tried to cross-sell, whether the required legalese was provided, or whether the correct policy or procedure was followed. The customer is much more likely to care whether the agent “understood” or “had knowledge” and, in the case of cross-selling, whether the effort was a relationship enhancer rather than a push.
This study yielded several findings of interest, two of special note:
1. Avoid overuse of “Yes” and “No” criteria: We discovered, for example, that most items on their internal quality monitoring forms allowed only “yes” and “no” responses. For some items this was appropriate, but for many it was not. Binary responses seriously reduced the variability of the data and produced a “poorer” dataset. This poorer dataset kept us from identifying where agents could exceed customer expectations, which created mass mediocrity and a large gap between the internal score and the customers’ score. Changing criteria (where appropriate) to a scaled assessment on the quality monitoring form produced a “richer” dataset, giving us a far better opportunity to identify where agents (and the company) could exceed customer expectations, and how. (Note that this will make all of your previous data non-comparable to the data accumulated going forward. This is unavoidable, but the inconvenience is temporary; over time, comparable data will again be available for period-over-period analysis.)
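To see why binary criteria produce a “poorer” dataset, consider a minimal sketch. The sample scores below are hypothetical, not from the study; the point is simply that a yes/no item compresses nearly every call into the same value, while a scaled item preserves the spread that analysis needs.

```python
# Hypothetical example: the same ten calls scored on one criterion two ways.
from statistics import pstdev

binary_scores = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # yes = 1, no = 0
scaled_scores = [5, 4, 5, 3, 4, 2, 5, 4, 1, 3]  # 1-5 scale

# Spread of the binary data is tiny: almost every call "passes" identically.
print(pstdev(binary_scores))
# Spread of the scaled data is much larger: it reveals where calls differ,
# which is exactly what you need to spot chances to exceed expectations.
print(pstdev(scaled_scores))
```

The scaled version tells you not just whether a call cleared the bar, but by how much, and that extra variability is what makes gap analysis possible.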
2. Give the more important items more weight: We also found that all the questions on their internal quality monitoring forms that did have a scale were weighted the same. This is a problem because customers view some elements as more important than others. Don’t you? The weights were derived from analytics on post-call survey data collected through the external Quality Monitoring (eQM) program. We then multiplied those weights by the ratings accumulated across all the questions on the quality monitoring form to arrive at a more accurate score. Together, these two changes:
1. Allowed us to better understand why a gap exists between the company’s and the customers’ expectations and perceptions.
2. Allowed us to manage the gap between the customer experience and the company’s expectations by clearly identifying which actions cause the gap to widen and which specific actions are needed to close it.
3. Allowed us to better control the gap through accurate resource deployment and investments (training, coaching, and systems).
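The weighted-score calculation described above can be sketched in a few lines. The criterion names, weights, and ratings here are illustrative placeholders; in the study, the weights came from post-call (eQM) survey analytics, not from guesswork.

```python
# Hypothetical sketch: customer-derived weights applied to form ratings.
# Weights sum to 1.0 so the result maps cleanly onto a 0-100 score.
weights = {"understanding": 0.40, "knowledge": 0.35, "cross_sell_fit": 0.25}

def weighted_score(ratings, weights):
    """Multiply each 1-5 rating by its importance weight, scale to 0-100."""
    raw = sum(weights[item] * rating for item, rating in ratings.items())
    return raw / 5 * 100  # 5 is the maximum rating on the scale

ratings = {"understanding": 5, "knowledge": 4, "cross_sell_fit": 3}
print(weighted_score(ratings, weights))  # a strong "understanding" rating dominates
```

With equal weights, a weak rating on a trivial item drags the score down as much as a weak rating on something customers actually care about; weighting fixes that distortion.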
So, the next time you find yourself under scrutiny over your quality assurance expenses, just remember: your internal quality monitoring forms could be blocking you from getting more value and performance out of your quality assurance investments.
Through real-world best practices, part 3 – the final chapter in this three-part series – highlights a few “how to” steps for overcoming barriers and becoming less of a Pain In The Ass (PITA) to your customers. It begins with four vital questions…
Step 1: Answer some questions.
According to W. Edwards Deming, the father of the quality movement, “workforces are only responsible for 15% of the problems.”