Internal Quality Monitoring is unable to answer the quality question




Do internal quality monitoring (iQM) scores actually tell you what the customer experienced? You are continually asked how well the contact center is serving the customer: how well is service delivered to customers who call to resolve a problem or ask a question? Many contact centers rely on a summary of operational metrics, assuming that hitting certain metric levels answers this critical question. Most often, though, the answer comes from iQM scores.

If your iQM is like most, you would have to conclude that most customers are extremely satisfied with the telephone service experience. Scores naturally migrate to the upper end of the iQM scoring scale. If you have 100 points available, the majority of your scores are probably 92 or higher, or even 95 and higher; essentially, you use only the top 10 points on the scale.

When attempting to answer the service quality question, basing such an important assessment on iQM, with the score inflation described above, undermines the credibility of the response. Every other department in your organization can report on its success with numbers that are not questioned. The contact center needs the same kind of answer: one that is accepted as valid, unlike internal quality monitoring results.

Let’s review your iQM program and begin the evolution toward providing a better answer. Who is doing the monitoring? Avoid the fox guarding the chicken coop. What items are scored? Focus your monitoring form on objective criteria: call control, providing the correct response, and effective relationship building. Why shouldn’t the iQM form include subjective assessments of the customer’s experience? Guessing at how the customer perceived the experience is inaccurate and contributes to the inflation of internal quality monitoring scores.

The customer is the best one to answer how the experience went. To be methodologically sound, you should assess the level of service delivered on a particular call immediately after it ends. This is External Quality Monitoring (eQM). The rating may appear subjective because it is not a hard metric such as ASA, or an internal monitoring score of the response's effectiveness from the company's perspective, but the customers' perceptions are the reality that we must deal with in contact centers. If your customers are not satisfied, all of those metrics are meaningless. Yet if you know how the customer graded the call, and you also have a good set of metrics and iQM scores, the answer to how well your center is performing becomes balanced and valid.

Customer Relationship Metrics conducted a research project that demonstrates that iQM scores do not equal the callers' evaluation of the service experience. The iQM form included 17 items, seven of which could be directly compared to the caller evaluations. We examined the iQM and eQM scores over a five-month period. As presented in the table above, there was virtually no relationship between the callers' evaluation of the experience (from the eQM program) and the iQM scores. The only statistically significant relationships were for perceived interest in helping and tone, and even those were weak.
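If you want to run this kind of check on your own data, here is a minimal sketch of the comparison: correlate each comparable iQM item with the caller's eQM rating of the same call. The file name, column labels, and significance threshold below are assumptions for illustration, not the study's actual materials.

```python
# A minimal sketch of the iQM-vs-eQM comparison, not the study's actual
# analysis. Assumes a CSV of paired per-call scores; the file name and
# column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

calls = pd.read_csv("paired_qm_scores.csv")  # one row per dual-scored call

# Illustrative stand-ins for the seven iQM items that could be
# compared directly to the caller evaluations.
iqm_items = ["interest_in_helping", "tone", "call_control",
             "correct_response", "greeting", "hold_handling", "closing"]

for item in iqm_items:
    r, p = pearsonr(calls[item], calls["eqm_caller_rating"])
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{item}: r = {r:+.2f}, p = {p:.3f} ({flag})")
```

In the research described above, only the first two items would show a significant (and still weak) relationship; everything else would come back as noise.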

The results of this research had a dramatic effect on the quality assurance program. The proof, from the customers' perspective, that the iQM form was not effective underscored the need for a valid answer to how well service was delivered. Beyond a better answer, a significant savings became possible. The original iQM program scored 17 items per call, on 5 calls per month for each of 2,000 agents. That equated to 170,000 item scores per month; at 4 calls evaluated per hour, scoring took 2,500 hours (not including feedback time). Covering those 2,500 hours took 17 FTEs at $45,000 per year, for a grand total of $765,000 (again, without feedback and coaching time).
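The arithmetic behind those figures can be laid out explicitly. This is simply the article's own numbers restated as a calculation; nothing here is new data.

```python
# Original iQM program cost, restated from the figures above.
agents = 2000
calls_per_agent_per_month = 5
items_per_call = 17
calls_scored_per_hour = 4
fte_salary = 45_000

calls_per_month = agents * calls_per_agent_per_month      # 10,000 calls
item_scores_per_month = calls_per_month * items_per_call  # 170,000 scores
scoring_hours = calls_per_month / calls_scored_per_hour   # 2,500 hours
ftes = 17                                                 # staffing per the article
annual_cost = ftes * fte_salary                           # $765,000

print(f"{item_scores_per_month:,} scores, "
      f"{scoring_hours:,.0f} hours, ${annual_cost:,}")
```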

With the results of this research, the iQM form was revamped to focus on objective measures. Scoring eight items allowed six calls to be evaluated per hour, requiring 12 FTEs at $45,000 per year, for a net personnel cost of $540,000. The improvement in the process yielded a savings of $225,000. Your own situation may be on a smaller scale, but the same direct benefit would apply. The savings in scoring time is compounded by having a more effective definition of quality. Your three-part answer needs to include: 1. Call Metrics, 2. Internal Quality Monitoring, and 3. External Quality Monitoring (an immediate evaluation of the call by the caller, combined with survey calibration).
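The same restatement for the revamped program shows where the $225,000 comes from; again, these are only the article's figures laid out as arithmetic.

```python
# Revamped program: 8 items per call, 6 calls scored per hour.
calls_per_month = 2000 * 5                 # unchanged: 10,000 calls
scoring_hours = calls_per_month / 6        # ~1,667 hours per month
revised_ftes = 12                          # staffing per the article
revised_cost = revised_ftes * 45_000       # $540,000
savings = 765_000 - revised_cost           # $225,000

print(f"{scoring_hours:,.0f} hours, ${revised_cost:,}, "
      f"savings ${savings:,} per year")
```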

About Dr. Jodie Monger

Jodie Monger, Ph.D. is the president of Customer Relationship Metrics and a pioneer in voice of the customer research for the contact center industry. Before creating CRMetrics, she was the founding associate director of Purdue University's Center for Customer-Driven Quality.

  • Rick

    To exclude all bias and achieve real objectivity, though expensive, real customer satisfaction is best measured by a third-party organization. Their surveys should include an “overall comments” section for the customers, and the third-party company is the one that “slices and dices” the results for you.

  • peter

    Interesting take, but I disagree with the thesis that quality should avoid measuring subjective behaviors. Actually, I go in the exact opposite direction: quality should embrace subjectivity, because that comes far closer to capturing the essence of the customer interaction, in ways that measuring “rep tasks” and objective behaviors never will. (You can protect against the “guessing” issue by correlating your results, and protect against grade inflation by removing quality scores from the rep scorecard.) I’ve seen this approach drive tremendous results in rep performance and morale, because it empowers reps while providing feedback that they crave.

    • Peter,

      So in essence you are saying that you can read customers’ minds. You are also saying that analysts, reps, and supervisors can read the mind of the customer. Of course they can’t. When you add SUBJECTIVITY, you are assuming you know how the customer felt. I spent a good CHUNK of my life asking people, “How do you know they feel that way?” So how do you? Implement an external quality monitoring measurement and you will know.

      • Peter Beaupre

        I don’t pretend to be able to read their minds. That’s why I lean on correlation metrics to check whether our quality staff’s approach to rating calls is predicting success (in customer experience, but also sales, FCR, etc.). This is a necessary component not only of quality but of overall performance improvement, for two reasons. First, customers can’t always articulate WHY they had a good experience; it’s not their job to reflect upon all the call center conversations they’ve ever had and extract true best practices. (Example: they may think a rep was fantastic, but not realize that it was because the rep mirrored the customer’s pace, language, etc.) Don’t get me wrong, the customer’s opinion is absolutely invaluable (and we measure it, and then correlate QA behaviors to it). The second reason that subjective behaviors (that are correlated) are critical is that supervisors need to be able to take this feedback and translate it into actionable coaching. If the sups are unclear on which behaviors truly drive a customer to provide positive feedback, they will struggle to deliver actionable coaching.

        That said, I respect that there are different models that work in different environments, and I have no doubt that your model has had quite a bit of success as well. It probably comes down to differences in channels (what constitutes success for a channel? Customer experience only? Sales? FCR? Retention?), differences in corporate culture, etc.

        I love hearing about and debating different models. We may agree to disagree, but I am sure there is value in both for different situations! I think the key is being able to match the right approach to the right organization.

        • Peter,

          Thanks for the contribution. As I interpret what you outline, a lot of that analysis would not be needed if the answer were simply taken from the customer. That is one of the main outcomes of the external quality monitoring mindset. I have seen a minimum 33% reduction in the amount of work quality staff have to do when they remove the excessive subjectivity that requires all of the back-end correlation (or is it causation?) analysis. I like things that have a cleaner line of sight. It is significantly easier in the interpretation phase, and constituents are more engaged as well.