Best Practices for your Quality Monitoring Form

Consider that the best practices for your quality monitoring form may be entirely different from what you have been standing behind. For most call centers, they are. This is why the average time between monitoring form overhauls is two years.

When quality monitoring forms are updated, people merely make slight adjustments instead of implementing best practices. What call centers most often possess are common practices, not the best.

With the amount of scrutiny that your call center is under each time budget season rolls around, if not every month, you must have an ironclad way to defend the million(s) you spend on a quality assurance program. If your quality measurement is always in flux, that is an impossible task.

“What call centers most often possess are common practices, not the best.”

What are you worth?

It’s a fact that people are the largest cost in your call center. You must have some way to prove the worth of those agents and staff to senior management. When they ask you “What value are we getting from what we are already spending?”, the answer may not come as easily as you need it to.

In reviewing the case study below, you may recognize your own center and better understand the reason for the constant scrutiny, as well as the path to ending it.

It’s not uncommon to expect internal quality monitoring (iQM) scores to mirror customer evaluations of the experience. Why wouldn’t the monitoring team be effective at scoring the caller’s experience on a call? Don’t they score five (or more) calls per agent per month? It seems they should know, and be able to report to your leadership, how much value is being generated by your center.

What does the customer say?

But that is not usually the case, and the mismatch is not very surprising once you compare those scores against an effective external quality monitoring (eQM) program that uses a post-call survey.

The following discussion comes from a study conducted in an inbound sales group and its sister service group. The center’s goal was to more closely align its internal quality monitoring (iQM) program with its customers’ evaluation of the service experience, as captured by its external quality monitoring (eQM) program.

The iQM scores were consistently high, but customer complaints, First Call Resolution performance, and customer defection as measured by Net Promoter Score (NPS) were all trending negative when quantified by the eQM process. The center wanted reality to look more like the iQM scores and less like the callers’ real evaluations captured by the eQM process.
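As a quick refresher on the metric mentioned above: NPS is calculated from a 0–10 “would you recommend us?” survey question, counting 9–10 as promoters and 0–6 as detractors. The sketch below only illustrates that standard calculation; the survey responses shown are invented, not data from the case study.

```python
# Standard NPS calculation: % promoters (9-10) minus % detractors (0-6).
# The responses below are hypothetical, for illustration only.
def net_promoter_score(responses):
    """Return NPS (-100 to +100) from a list of 0-10 survey answers."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

responses = [10, 9, 8, 7, 6, 5, 9, 10, 3, 8]  # made-up post-call survey answers
print(f"NPS: {net_promoter_score(responses):+.0f}")  # 4 promoters, 3 detractors -> +10
```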

A new (best practice) guide

In short, they needed new guiding principles that we call Impact Quality Assurance (iQA).

Here’s where you may resemble the call center in the case study. Internal quality monitoring is a very labor-intensive (and expensive) process in which supervisors or other designated call monitors assess agent-handled calls by grading them against defined criteria on quality monitoring forms. The hope is that good scores on the graded calls reflect a good customer experience. If you’re like most, hope is all you have, because proof is grossly lacking.

Understand the customer perspective

Your customer cares very little about whether your agent tried to up-sell, whether the required legalese was provided, whether the correct policy was cited, or whether the defined procedure was followed. The customer cares whether the agent understood the issue, had the knowledge to be an advisor, conveyed confidence, and treated the caller as a valued customer. These are all aspects of the call that must not be guessed at by the call monitor. Call monitoring people must stop guessing about what the customer felt; to keep guessing is negligence.

“Call monitoring people must stop guessing about what the customer felt.”

In addition to the analysis proving that iQM teams cannot effectively rate callers’ perception of service on a call, there are two common (not best) practices to note:

  1. Too many “yes” and “no” criteria: Most items on the internal quality monitoring forms I reviewed allowed only “yes” and “no” responses. The agent either got the points or did not. For some items this makes sense, but for most items the lack of “partial credit” via a scale of success caused scores to be inflated, in an effort not to be unfair to an agent who somewhat satisfied the item. The dichotomous scale (“yes” versus “no”) seriously reduced the variability of responses and produced a less valuable dataset. This constrained dataset, which tended to garner “yes” check marks and therefore all of the points for each item, pushed the iQM score higher than the caller’s real perception. Changing the criteria to a scaled assessment on the quality monitoring form produced a “richer” dataset that offered a greater opportunity to identify where, and how, agents (and the company) could exceed customer expectations. (Note that this will make all of your previous data non-comparable to the data accumulated going forward. That is an unavoidable result, but the inconvenience is temporary; over time, comparable data will be available for period analysis. The resulting benefit will far outweigh the cost.)
  2. Value what is more important: Additionally, I found that all questions on the internal quality monitoring forms were given the same value in the final score. It is important to use the caller evaluations to quantify what matters to the caller. Your monitoring form may contain a guess at the weighted values, but unless analysis of the customer experience was used to determine them, you cannot effectively defend the components of your form or the resulting score. Essentially, analytics from the post-call survey of an external quality monitoring (eQM) program determine what is important, and the monitoring form should score the agent behaviors that influence those perceptions. You do not simply take the list from the eQM analysis and put it onto the iQM form. Remember, you should not be guessing at the callers’ perception, but rather scoring how effective the agent is at influencing that perception. (A sketch of how both of these practices affect a score follows this list.)
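To make those two practices concrete, here is a minimal sketch of one monitored call scored both ways. The four criteria, the 0–4 scale, and the importance weights are hypothetical stand-ins, not the actual form from the case study; in practice the weights would come from driver analysis of your own eQM survey data.

```python
# Hypothetical monitoring-form scoring sketch. Criterion names, weights,
# and ratings are invented for illustration; real importance weights
# should come from analysis of your eQM post-call survey data.

# Importance weights (assumed to be derived from eQM analysis). They sum
# to 1.0 so the weighted score lands on a 0-100 scale.
WEIGHTS = {
    "understood_issue":      0.35,
    "knowledgeable_advisor": 0.30,
    "conveyed_confidence":   0.20,
    "followed_procedure":    0.15,
}

def binary_score(ratings):
    """Common practice: equal-value yes/no items.

    The monitor rounds a "somewhat satisfied" item (2 of 4) up to a
    full "yes" to avoid being unfair to the agent -- the inflation.
    """
    passed = sum(1 for r in ratings.values() if r >= 2)
    return 100 * passed / len(ratings)

def weighted_scaled_score(ratings):
    """Best practice: 0-4 scale per item, weighted by customer importance."""
    return sum(100 * (ratings[item] / 4) * weight
               for item, weight in WEIGHTS.items())

# An agent who only somewhat satisfied the items customers value most:
ratings = {
    "understood_issue":      3,
    "knowledgeable_advisor": 2,
    "conveyed_confidence":   3,
    "followed_procedure":    4,
}

print(f"binary form score:           {binary_score(ratings):.0f}")          # 100
print(f"weighted, scaled form score: {weighted_scaled_score(ratings):.0f}")  # 71
```

Under the binary form this agent looks perfect; under the scaled, weighted form the shortfall on the highest-weighted items shows up immediately, which is exactly the kind of gap the case-study center needed to see.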

These best practices yielded internal quality monitoring forms that:

  1. Allowed us to identify the reason for the gap that exists between the company’s scores and the customers’ scores.
  2. Allowed us to manage the gap between the customer experience and the company expectation by clearly identifying what actions cause the gap to widen and what specific actions are needed to close it.
  3. Allowed us to better control the gap through accurate resource deployment and investments (training, coaching, and systems).

End the common practices (they’re bad)

So the next time you find yourself under scrutiny because of your quality assurance expenses, remember that your internal quality monitoring forms are probably blocking you from getting more value and performance from your quality assurance investments.

It’s time to stop following the lead of everyone else. It’s time to increase your analytic skills and use what has been shared here to begin real best practices. The customer and the competition require you to do better with quality assurance. And your company requires you to obtain greater value from your quality assurance spending. Until you do, stop calling what you have best practices.

“The customer and competition require you to do better with quality assurance.”

What do you do now?

If the information above makes your path forward totally clear, then you are all set. If it has caused confusion or does not give you what you need, then you should register for a complimentary consultation. We’ll review your form and give you feedback on what’s preventing your success.

About Dr. Jodie Monger

Jodie Monger, Ph.D. is the president of Customer Relationship Metrics and a pioneer in voice of the customer research for the contact center industry. Before creating CRMetrics, she was the founding associate director of Purdue University’s Center for Customer-Driven Quality.
