Studies on Quality Monitoring
Hi, my name is Susan. I’ve worked in the Customer Service Department at my company for just short of 20 years now, and I love almost everything about it. I’m one of the top-ranked agents in the department and, according to the masses, I work for one of the best managers in the whole company. But lately I think I’ve been experiencing agent burnout, because I have begun to think about doing something different.
Normally I keep to myself and just go with the flow when it comes to what management wants me to do from day to day. However, as I get older I feel I need to address the elephant in the room – aka our QA program. I’m sure the process was created with good intentions; however, it seems to have outlived its usefulness, and my tolerance. Please let me share with you my perception and let me know what you think.
In our Contact Center
Each manager is responsible for listening to and evaluating 5 calls per month for every agent on their team (sometimes that doesn’t happen). The calls are evaluated against a checklist of items, and I don’t really know how the criteria were created. It seems as if someone, or some committee, decided at some point along the way that the items being measured are what make a call a good one.
I understand the intended logic: if the agent did these X number of things, then the customer would have left the call satisfied. And if agents collected all of the necessary information from the customer, the customer would be happy. But it isn’t really that simple.
In addition to the basic items on our call monitoring form, like obtaining the customer’s name, address and email, there are several items that are subjective. They require managers to make assumptions about how the customer feels about the call.
For example, one of the scoring items on the evaluation form asks the evaluator to rate whether the agent treated the caller as a valued customer. It doesn’t feel right to ask any manager (or agent, for that matter) to guess how the customer would answer that question.
Agent burnout builds
We all know that each of us has a different definition of what would make us feel valued. And if three managers reviewed the same call, they would not all answer that question the same way. On items like this it’s almost impossible to obtain consistency across managers. Why does the form have questions with no right answer? These results affect people’s performance ratings and compensation, and it just doesn’t feel right.
Another question on our monitoring form that is always a hot debate among agents is the one that asks whether the customer received a fair resolution to their situation. When a manager has to justify the rating they gave with, “I know I would be…”, that doesn’t feel right either.
Asking managers to pretend that they are the customer and to evaluate agents on behalf of the customers feels wrong. Some managers, instead, apply the company slant and define fair as whether or not the agent followed procedure.
After being divorced twice, I have learned that attempting to interpret someone’s feelings based on tone and assumptions doesn’t work.
I still can’t get over the fact that my performance rating and compensation are affected by this guessing game!
Voice of the Customer?
There was an attempt a few years ago to use our voice of the customer program to measure customer perception, but it did not work.
Customer feedback was collected as much as two weeks after we helped them in the contact center. The comments customers provided often spoke of not remembering the agent contact. Or they talked about several agent contacts and we couldn’t tell who should own the feedback.
Now they are testing a post-call survey that collects customer feedback immediately. I am very concerned, though, because the survey only asks customers to evaluate agents, and no customer comments are collected.
It feels like our company wants us to be responsible for product and policy problems. We can’t do anything about that. I feel like I am in a hopeless situation. I want to improve but it just feels like we are being dinged for everything.
It didn’t use to be like this. In the past few years our executive leaders have talked more about customer retention and loyalty. They talk more about the importance of the work we do in the contact center but it seems like we are just being scrutinized more. We do not feel very valued. Agent burnout is spreading like wildfire.
Who is Susan?
Susan is a composite of many stories we have heard from contact center agents over the past 20 years. Over this time we’ve spoken with many agents to get their view of the company’s quality assurance program. We also have team members who have managed in call centers.
Because of stories like Susan’s we included the question, “Would your agents say they are satisfied with your quality assurance program?” in the self-assessment 29 Mistakes to Avoid with Quality Assurance Programs.
Susan’s story is a very real one in the contact center industry. And the time has come to transform our traditional quality assurance practices before we lose thousands like Susan from our talented agent ranks.
How do you fix it?
It’s very likely the issues mentioned in Susan’s story can be found in your center. We have been told time and time again by contact centers across the country that agent morale is falling and turnover is rising. Agents seem to understand the need for a quality assurance program, and most of them want one. But only if it feels right.
This is why we developed the Impact Quality Assurance (iQA) model. It is the gold standard in quality assurance for customer-centric companies. The evaluation component of the methodology utilizes internal quality monitoring (iQM) that is streamlined to address only the key points needed to make an effective call, and it eliminates all of the role-playing, hypothetical questions that are unfair to agents.
Layered with effective iQM is an external quality monitoring (eQM) program that utilizes a scientifically based post-call survey asking for the customers’ insights and opinions about their company, agent and resolution experience. This, along with the customer comments, yields actionable data that can be leveraged to make sound business decisions.
The two-layer approach, with effective iQM and eQM processes, is what enables companies to take their service to the next level and uncover pain points for not only the customers, but also the agents. Once that information is identified, you can make the appropriate changes in processes and technology to mitigate or eliminate the pain completely.
So I ask you, would your agents say they are satisfied with your current quality assurance program? (Question 22 in this self-assessment) Or will you need to find a replacement for Susan?
In late May, the QATC (Quality Assurance & Training Connection) published the results of its quarterly survey on critical quality assurance and training topics in call centers, focusing on quality monitoring call calibration practices. I found the survey results quite interesting (sometimes scary), but for very different reasons than those highlighted in the QATC report.
1) Quality Monitoring Calibration requirements – According to the survey, 24% of respondents indicated that calibration participants were not required to review calls prior to the call calibration meeting. In these cases, the meeting is a feel-good, group-think exercise and not a true call calibration session. Yikes! Assuming the Quality Assurance team in the call center does not grade every call by committee, such an exercise is ineffective at gauging the degree of disparity that exists within the current call monitoring process. And since disparity is not being measured, the effectiveness of call calibrations cannot be quantified. Result: a waste of time.
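The disparity described above can actually be quantified quite simply. As an illustrative sketch (the scoring scale, call data, and 5-point tolerance below are hypothetical, not from the survey), a few lines of Python show one way to measure how far apart evaluators land when they score the same calls independently before the calibration meeting:

```python
from statistics import mean

# Hypothetical data: three managers independently score the same five
# calls on a 0-100 quality scale before the calibration session.
scores = {
    "call_1": [85, 70, 92],
    "call_2": [60, 58, 75],
    "call_3": [90, 88, 91],
    "call_4": [72, 55, 80],
    "call_5": [95, 93, 94],
}

def calibration_report(scores, tolerance=5):
    """Summarize evaluator disparity: the score spread on each call,
    the average spread, and the share of calls where all evaluators
    landed within an acceptable tolerance of one another."""
    spreads = {call: max(s) - min(s) for call, s in scores.items()}
    within = sum(1 for gap in spreads.values() if gap <= tolerance)
    return {
        "per_call_spread": spreads,
        "avg_spread": mean(spreads.values()),
        "pct_within_tolerance": 100 * within / len(scores),
    }

report = calibration_report(scores)
print(f"Average spread: {report['avg_spread']:.1f} points")
print(f"Calls within tolerance: {report['pct_within_tolerance']:.0f}%")
```

Tracking a simple metric like this from one calibration session to the next is what lets a center say whether its calibrations are actually shrinking evaluator disparity, rather than relying on a group discussion with no baseline.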