As the economy has contracted over the past two years, many organizations have focused on minimizing costs by reducing (if not eliminating) ongoing training, quality initiatives, hiring, promotion, and the like. The result has been a decline in employee engagement, which has a direct and measurable impact on the way employees treat your most valuable asset – your customers.
According to the Gallup Q12 employee-engagement survey, the following questions represent the largest drivers of employee engagement (which correlates with employee productivity, customer loyalty, and bottom-line growth):
- Do I know what is expected of me at work?
- Do I have the materials and equipment I need to do my work right?
- At work, do I have the opportunity to do what I do best every day?
- In the last seven days, have I received recognition or praise for doing good work?
- Does my supervisor, or someone at work, seem to care about me as a person?
- Is there someone at work who encourages my development?
- At work, do my opinions seem to count?
- Does the mission/purpose of my company make me feel my job is important?
- Are my co-workers committed to doing quality work?
- Do I have a best friend at work?
- In the last six months, has someone at work talked to me about my progress?
- This last year, have I had opportunities at work to learn and grow?
Your quality assurance team, and more specifically your calibration process, has a direct impact on several of these questions (questions 1, 3, 4, 6, 11, and 12). How do you think Agent Joe feels when QA Jerry tells him he did great on his last call, and then QA Ben gives him a mediocre call-monitoring score the very next week on a nearly identical call? This kind of inconsistency is exactly what kills employee engagement, along with the credibility of your Quality Assurance team.
I am not suggesting that you eliminate calibration, because consistency and accuracy are vital components of any high-functioning team. But to quantifiably attain high levels of accuracy and consistency within your Quality Assurance team, you must leap beyond the current calibration process to a more rigorous and measurable one. The risk from inadequate calibration is substantial, both to the organization and to you as a leader. Reduce that risk by engaging in what we at Customer Relationship Metrics (CRM) call Inter-Rater Reliability (IRR).
In the next post of this series I will explain the difference between calibration and IRR. But in the meantime, I’ll leave you with some food for thought:
- A highly tenured QA team conducting their very first Inter-Rater Reliability test yielded a 32.32% failure rate in accuracy of call scoring. Such a failure rate is hardly unusual.
- This same team reached agreement (consistency) in call scoring only 56.5% of the time (and this is WITH well-defined scoring instructions).
- Scoring was least consistent in the skills collectively called customer service or soft skills. These are the very skills that drive a positive customer experience and brand loyalty.
- This highly tenured team is made up of only four Quality Assurance agents. Our research has shown that the larger the QA team, the higher the initial failures in accuracy and consistency.
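The agreement (consistency) figure above is simply the share of calls on which raters' scores match. As a rough illustration of how that kind of number is computed, here is a minimal sketch of average pairwise agreement across a four-person QA team; the rater names and pass/fail scores are hypothetical, not data from the study cited above:

```python
from itertools import combinations

# Hypothetical call-monitoring verdicts (1 = pass, 0 = fail) for six calls.
# Illustrative values only, not the actual study data.
scores = {
    "Jerry": [1, 1, 0, 1, 0, 1],
    "Ben":   [1, 0, 0, 1, 1, 1],
    "Ana":   [1, 1, 0, 0, 1, 1],
    "Raj":   [1, 1, 0, 1, 1, 0],
}

def percent_agreement(ratings):
    """Average pairwise agreement: the share of calls on which
    each pair of raters gave the same score, averaged over all pairs."""
    pairs = list(combinations(ratings.values(), 2))
    per_pair = [
        sum(a == b for a, b in zip(r1, r2)) / len(r1) for r1, r2 in pairs
    ]
    return sum(per_pair) / len(pairs)

print(f"Team agreement: {percent_agreement(scores):.1%}")  # → 66.7%
```

Note that raw percent agreement does not correct for matches expected by chance; more rigorous IRR work typically uses a chance-corrected statistic such as Cohen's kappa.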