Show me the money! Part 3 of a 3-Part Series on IRR



Based on the other posts in this series, you’ve probably concluded that Inter-Rater Reliability (IRR) is a far more rigorous process than the one you and your team use, and that it may require more time than you’ve been allocating to calibration.  In many cases, these conclusions are valid.

However, when executed correctly and on an ongoing basis, IRR is not a cost but a process that delivers value back to the organization, reducing both direct and indirect costs by enhancing the accuracy and efficiency of the feedback process to enable long-term performance improvement.

The following scenario plays out weekly, if not daily, in nearly every call center.  Agent Jack receives feedback about his calls that were monitored the previous week.  He is praised for his positive demeanor, “can-do attitude” and creative problem-solving skills, which allowed him to solve customer problems without placing them on hold or transferring them to other departments.  Jack is pleased but confused by the misalignment with the previous week’s assessment, which was hardly glowing.  That earlier feedback highlighted his incorrect call opening, noted that his calls were longer than those of his peers, and pointed out that he failed to capture email addresses on 3 of the 5 calls monitored.  To him, the difference did not reflect the performance he remembered.  So Jack raises the issue with his supervisor as well as the company’s Human Resources department, launching a lengthy and time-consuming process of research, gap analysis and documentation.

This evaluation disconnect can be dramatically reduced with a process that aligns what your Quality Assurance (QA) team listens for with what matters most to customers and to organizational profitability, so that everyone on the QA team reflects the same organizational priorities in their monitoring feedback.  How much direct and indirect cost would be saved if your QA team were so consistent in their feedback that hours were saved each week because agents no longer perceived inconsistency in the performance feedback they were receiving?  Imagine if, instead of researching and documenting gaps in performance assessments, your QA and coaching teams could actually focus on improving performance.  All of these benefits are real outcomes of Inter-Rater Reliability testing.

The example below is based on a call center that supports a customer base of 350,000.  The average revenue per sale is $85 and the average lifetime value of each of these customers is $4,000.  Given the financial impact of every customer, a 5% improvement in three key areas – customer defection, recommendation rate and repurchase rate – generated an additional $4.8 million in revenue.

  • A 5% improvement (decrease) in the number of customers who defect allows the company to retain an additional 1,102 customers, generating over $4 million based on the lifetime value of a customer.
  • A 5% improvement (increase) in recommendations of the company’s products would yield an additional $53,253 in sales assuming only 5% of the customers who indicated they would recommend the company actually do so.
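The retention figure in the first bullet is easy to check.  The 1,102 retained customers come from the example above; the sketch below simply applies the stated $4,000 lifetime value:

```python
# Figures stated in the example above.
lifetime_value = 4_000        # average lifetime value per customer ($)
retained_customers = 1_102    # customers kept by a 5% drop in defections

# Revenue preserved by retaining those customers.
retention_revenue = retained_customers * lifetime_value
print(f"${retention_revenue:,}")  # $4,408,000 -- "over $4 million"
```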


In this example, we are using a customer base of 350,000 customers, 71.6% of whom fall into the ‘delight’ category for ‘likelihood to recommend’ – 250,600 customers.  If only 5% of those who said they would recommend actually do so, and each recommendation converts to a sale, there will be 12,530 new sales.  To convert this into revenue, multiply 12,530 sales by the estimated revenue per sale ($85) to yield $1,065,050 generated through recommendations.

In order to determine the revenue increase generated by a 5% improvement in the recommendation rate, find the number of additional customers that would now fall into the ‘delight’ category and once again assume that only 5% will actually make a recommendation that generates a sale.  Now there are 13,156.5 sales (roughly 13,157, versus the baseline of 12,530).  Multiply by the estimated revenue per sale ($85) to arrive at $1,118,303.  Subtracting the baseline figure of $1,065,050 leaves $53,253 in additional revenue resulting from a 5% improvement in likelihood to recommend.
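The two paragraphs above can be sketched as a short calculation.  All inputs are taken directly from the example; note that the uplift comes out to $53,252.50, which the article rounds to $53,253:

```python
# Parameters from the worked example above.
customer_base = 350_000
delight_rate = 0.716        # share "likely to recommend"
conversion = 0.05           # share of recommenders who generate a sale
revenue_per_sale = 85       # $

baseline_sales = customer_base * delight_rate * conversion   # 12,530 sales
baseline_revenue = baseline_sales * revenue_per_sale         # $1,065,050

# A 5% improvement in the recommendation rate.
improved_sales = baseline_sales * 1.05                       # 13,156.5 (~13,157)
improved_revenue = improved_sales * revenue_per_sale         # $1,118,302.50

uplift = improved_revenue - baseline_revenue
print(f"${uplift:,.2f}")  # $53,252.50, rounded to $53,253 in the text
```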

The same logic applies to the next calculation.

  • A 5% improvement (increase) in future (re)purchases of the company’s products would yield an additional $369,123 in sales assuming only 35% of the customers who indicated they would make another purchase from the company actually do so.
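The repurchase figure can be reproduced the same way.  The article does not state the baseline share of customers who are ‘likely to repurchase’, so the 70.9% below is back-solved from the $369,123 result and should be read as an inferred assumption, not a published input:

```python
customer_base = 350_000
repurchase_delight_rate = 0.709   # ASSUMPTION: back-solved from $369,123, not stated
conversion = 0.35                 # share of likely repurchasers who actually buy again
revenue_per_sale = 85             # $

baseline_revenue = (customer_base * repurchase_delight_rate
                    * conversion * revenue_per_sale)
improved_revenue = baseline_revenue * 1.05   # 5% improvement in repurchase rate

uplift = improved_revenue - baseline_revenue
print(f"${uplift:,.0f}")  # $369,123
```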

A 5% improvement across all call center agents is attainable within a single year with the proper investment of staff and time.  This company supports its 350,000 customers with approximately 300 full-time call center agents.  These agents are managed by five full-time QA analysts and two performance coaches (at an annual cost of $250,000) as well as a team of call center supervisors.  Whether achieving a 5% improvement in call center performance requires one year or two (the timeframe required by a majority of Customer Relationship Metrics’ business partners), the ROI exceeds 800%.  However, it is important to note that once the improvement is achieved, ongoing investment in staff (QAs and coaches) will be needed to maintain performance and/or continue driving performance improvements.
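Putting the three revenue streams together against the stated staffing cost gives the ROI claim.  The sketch below assumes the conservative two-year timeframe mentioned above:

```python
# Revenue gains from the three 5% improvements described above ($).
retention = 4_408_000    # reduced defection (1,102 customers x $4,000)
recommendation = 53_253  # higher recommendation rate
repurchase = 369_123     # higher repurchase rate
total_gain = retention + recommendation + repurchase   # ~$4.8 million

annual_staff_cost = 250_000   # five QA analysts + two performance coaches
years = 2                     # conservative timeframe
investment = annual_staff_cost * years

roi = (total_gain - investment) / investment * 100
print(f"{roi:.0f}%")  # 866% -- comfortably above 800%
```

In the one-year case the same arithmetic gives an even higher figure, which is why the article can say the ROI exceeds 800% either way.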

About Jim Rembach

Jim Rembach is a panel expert with the Customer Experience Professionals Association (CXPA) and an SVP for Customer Relationship Metrics (CRM). Jim spent many years in contact center operations and leverages this to help others. He is a certified Emotional Intelligence (EQ) practitioner and frequently quoted industry expert. Call Jim at 336-288-8226 if you need help with customer-centric enhancements.

November 18th, 2010 | Inter-Rater Reliability (IRR)