Avoid Call Center Schizophrenia from Pay for Performance – Part 2 of a 2-Part Blog Series




Whenever I have an opportunity to visit a business partner’s call center, I take a few minutes to conduct a rather unscientific test; call it morbid curiosity.  As I pass by cubicles and am introduced to call center staff, I always ask how agent performance is assessed.  To me, the variety of responses I hear speaks volumes, and perhaps helps explain the responses I get from call center agents when I pose the exact same question.  Typical responses are a shrug of the shoulders, a shake of the head and a quick glance to co-workers for reinforcement.  They don’t know, feel they cannot explain the complexities or simply don’t remember.

Call center agents are expected to know and retain more and more information in today’s complex business environments.  Unfortunately, our short-term and working memory capacities have not increased to accommodate this environment.  Agents are also expected to leave customers with the perception that the service was excellent while managing the call to the operational metrics.  Talk about feeling committed…to an asylum!

If you want your agents to feel less like they NEED to be committed and more committed to the customer experience, be clear about the very basics of their job expectations and act in accordance with them.  One business partner I visited earlier in 2010 had their KPIs updated in real time on dozens of flat screens in the call center.  Another business partner created banners as a colorful reminder of the quarter’s call center initiatives.  These are great ideas because the agents understood why the KPIs were important, what impact they individually had on them and how they benefited from performing well against them.  Don’t assume this is the case without testing your assumption.

A balanced scorecard can serve as a visual cue for agent success.  Well-designed balanced scorecards are typically made up of four parts:

1.  Metrics by which performance will be assessed,

2.  Performance objectives for each metric,

3.  Weighting applied to each metric (an indication of relative importance), and

4.  An individual agent’s performance on each metric.
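The four parts above can be sketched as a simple data structure with a composite score. This is a hypothetical illustration, not the author's scoring formula: the metric names, objectives and weights are invented, and attainment against each objective is capped at 100% before weighting so that over-performance on one metric cannot mask under-performance on another.

```python
# Four scorecard parts as data: metric, objective, weight, agent actual.
# All names and numbers are illustrative only.
scorecard = {
    # metric:            (objective, weight, agent_actual)
    "issue_resolution":  (0.85, 0.40, 0.90),   # resolution rate
    "csat":              (0.80, 0.30, 0.78),   # customer satisfaction
    "adherence":         (0.95, 0.20, 0.97),   # schedule adherence
    "avg_handle_time":   (1.00, 0.10, 0.92),   # normalized so higher = better
}

def composite_score(card):
    """Weighted sum of attainment, each metric capped at 100% of objective."""
    total = 0.0
    for objective, weight, actual in card.values():
        attainment = min(actual / objective, 1.0)
        total += weight * attainment
    return total

print(f"Composite score: {composite_score(scorecard):.1%}")
```

Because the weights sum to 1, the composite reads directly as a percentage of a perfect score, which makes it easy to display on a wallboard or real-time dashboard.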

Selecting performance metrics

Traditionally, call centers have managed performance based on the goal of operational efficiency.  We see this drive for efficiency continue today through the management of call center agents to metrics like average handle time, number of calls handled, after call work, etc.  While these are very important metrics, the fallacy is in managing a call center to only these internally-focused metrics.  Customers do not care how much time an agent has to spend filling out paperwork or electronic forms after the call ends.  Rather, they care that an agent is available within a reasonable amount of time when they call.  Customers care even less about how long they need to spend on the phone with a (single) call center agent, as long as the problem has been resolved with that call.  A 30-minute call might end with a delighted customer, a frazzled call center manager and a very confused agent.  Are you seeing the disposition toward schizophrenia now?

The key to success is in selecting a variety of metrics that speak to the customer experience and balancing them with the business need for efficiency (we will speak more about this balance when we talk about scorecard weighting).  Best-in-class business partners also incorporate other data sources into their agent scorecards such as internal quality monitoring data, chat, text, SMS and email data, etc.

Setting performance objectives

We’ve all been in situations where a goal was picked out of thin air by a well-intentioned executive and then carved in stone for us to follow.  Absent that scenario, the best goals are set based on actual historical performance.  Important elements to consider in setting performance goals are:

  • Mean or median? (measures of central tendency) – In order to set a goal for future performance, we must first understand how we’ve performed in the past.  Measures of central tendency indicate the point on a performance continuum where the members of a group or dataset tend to gather.  While the mean (often referred to as the average) is more widely reported in call centers, it is most useful in groups whose performance is relatively normal (normal from a statistical standpoint, that is).  A normal distribution is one in which the majority of group member performance is centered around the middle of the performance continuum and the distribution is symmetrical to the right and left – in short, a bell curve.  Unfortunately, this type of distribution is not typical of call center performance.  As such, the median (the point at which half of the group’s members fall above and half below) may be a better way to determine how the call center “typically” performs on any given metric.
  • Time frame of historical data – Having decided whether the mean or median is the more appropriate statistic for determining a baseline of past performance, we must now define a time frame to represent history.  At a bare minimum, Customer Relationship Metrics recommends that at least three months of data be used to minimize the impact of anomalies and non-normative events on performance.  Ideally, a larger time frame would be used that encompasses all stages of a company’s business cycle or seasons (one year).  The danger in using more than a single year of historical data to establish a performance baseline is the possibility of negating or underplaying recent performance gains – essentially making the performance goal too easy to achieve.
  • Predicting the future – Once a historical baseline of performance has been established, the same data set can be used to make predictions about future performance (statistical modeling).  Performance objectives can then be based on those predictions.  Some business partners have also found success in applying a 5% to 7% “lift” to historical performance and using that lifted figure as the performance goal for the following year.
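The mean-versus-median point and the “lift” approach can both be shown in a few lines. The handle times below are made up but deliberately right-skewed, as call center data typically is, so the mean and median diverge; the 5% lift on a hypothetical resolution-rate baseline follows the rule of thumb described above.

```python
import statistics

# Illustrative handle times (minutes) for one group over a review period.
# The two long calls skew the distribution right, pulling the mean upward.
handle_times = [4.2, 4.5, 4.8, 5.0, 5.1, 5.3, 5.6, 6.0, 9.5, 14.0]

mean_aht = statistics.mean(handle_times)      # inflated by the two long calls
median_aht = statistics.median(handle_times)  # closer to "typical" performance

print(f"Mean AHT:   {mean_aht:.2f} min")
print(f"Median AHT: {median_aht:.2f} min")

# The 5%-7% "lift" approach, for a metric where higher is better
# (e.g., a hypothetical resolution rate): goal = historical baseline * 1.05.
baseline_resolution = 0.82         # median resolution rate over the past year
goal = baseline_resolution * 1.05  # 5% lift applied to the baseline
print(f"Next year's resolution goal: {goal:.1%}")
```

Here the mean overstates “typical” handle time by more than a minute relative to the median, which is exactly why the median can be the better baseline for skewed call center data.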

Metric weighting

The weighting applied to each metric on a scorecard indicates its relative importance to the call center and to the larger organization.  Before arbitrarily applying weighting or points to each metric, think about the organizational goals that have been set for the fiscal year and the ways in which the call center contributes to these goals.  Doing so will help you make the first critical decision – whether to focus on the customer’s experience or on organizational costs.  Weighting within each category of metrics (operational vs. customer experience, etc.) can then be determined based on the degree of impact each metric has on the category outcome (e.g., issue resolution has a higher impact on customer experience than courtesy, so issue resolution should have a higher point or weighting allocation associated with it).
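This two-step decision – split points between categories first, then distribute within each category by impact – can be sketched as follows. The category split and impact shares are invented for illustration; the issue-resolution-over-courtesy ordering mirrors the example above.

```python
# Hypothetical first decision: 70% of points to customer experience,
# 30% to operational efficiency. All numbers are invented.
category_split = {"customer_experience": 0.70, "operational": 0.30}

# Relative impact of each metric on its category's outcome
# (shares sum to 1 within each category).
impact = {
    "customer_experience": {"issue_resolution": 0.6, "courtesy": 0.4},
    "operational":         {"avg_handle_time": 0.5, "adherence": 0.5},
}

def metric_weights(split, impact):
    """Final per-metric weight = category weight x within-category impact."""
    weights = {}
    for category, metrics in impact.items():
        for metric, share in metrics.items():
            weights[metric] = split[category] * share
    return weights

for metric, weight in metric_weights(category_split, impact).items():
    print(f"{metric}: {weight:.0%}")
```

With this split, issue resolution ends up carrying more weight than courtesy (42% vs. 28% of the total scorecard), making the customer-experience focus visible to every agent.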

Individual agent performance

If one of your goals in implementing a balanced agent scorecard is to keep agents informed about their performance and incite healthy competition, ensuring that your agents have ready access to accurate scorecards will be a key determinant of the initiative’s success.

During one of my recent visits to a business partner, I took my usual walk through the call center and was quite pleased to see the number of agents who were logged in to Customer Relationship Metrics’ MPM real-time agent scorecards.  MPM (Metrics Performance Manager) is a reporting tool that Customer Relationship Metrics uses as part of our applied business intelligence services to gather data from disparate sources.  We’ve found that one of the outputs of this reporting tool that can be very motivating to agents is the scorecard.  In this call center, agents were actively managing their own performance and receiving immediate feedback from the system about the changes they were making to their interactions with customers.  Feedback from the ACD about their efficiency, feedback from customer satisfaction surveys and feedback from the Quality Assurance team are all in a single location, updated in real time.  Imagine the burden you would remove from your supervisors if your agents were that tuned in to their own performance!

About Jim Rembach

Jim Rembach is a panel expert with the Customer Experience Professionals Association (CXPA) and an SVP for Customer Relationship Metrics (CRM). Jim spent many years in contact center operations and leverages this to help others. He is a certified Emotional Intelligence (EQ) practitioner and frequently quoted industry expert. Call Jim at 336-288-8226 if you need help with customer-centric enhancements.

  • Chuck Udzinski

    Hi Carmit: Just finished reading your posts and found them very interesting. We started communicating departmental goals and expectations on a monthly basis two years ago and the results have been astounding. Last year we raised our NPS by 15 points, due in large part to a more focused approach as to what each agent was expected to do.

    We’re trying to introduce such a scorecard to our team; however, the one stumbling block is the heavy lifting that needs to be done manually to get the results.

    I’m guessing we haven’t purchased MPM otherwise we would be using it by now. We’re in a tough spot as far as purchases for 2010 but I would like to learn more about MPM in case I can work it in for 2011.

    • Good afternoon Chuck and thank you for your post!

      I know that creating and updating agent report cards manually is a big time drain (according to an article published in the Harvard Business Review [1], 90% of managers spend their time on ineffective tasks – among them the reporting task you describe), but the fact that you are seeing such a dramatic improvement in your NPS indicates that 1) the high-level work we did on the agent scorecard appears to be targeting the right agent behaviors (for now) and 2) you and your team are effectively coaching, training and holding people accountable for results.

      MPM (Metrics Performance Manager) is an upgrade to the VoCPM (Voice of the Customer Performance Manager) analytic tool you are using today. It allows me to review the business more holistically due to its ability to pull in multiple data sources. For scorecards, it would eliminate any manual scorecards your team is creating by hand, and provide near real-time reporting. Jim will be onsite on the 20th, I will make sure he puts this on his agenda.
      Thanks again for your post,
      Carmit DiAndrea

      1. Bruch, H., & Ghoshal, S. (2002). Beware the busy manager. Harvard Business Review, pp. 62–69. Retrieved 10 August 2010.

  • Jay Hammans

    Outstanding piece, Carmit! I’m consistently amazed at how many call centers are managed with the “shrug and nudge” approach. Unfortunately, I often see organizations where the current methodology of performance management, no matter how dysfunctional, tends to be ingrained both in Executive Management and in the entire Operations Management culture. How does Customer Relationship Metrics work through this thought barrier and overcome the resistance to change?

    Thanks again for a great piece of writing!


  • Hello neighbor and thank you for your post!

    We use a three-phased methodology in working with our business partners that is loosely based on the work of Dr. Robert Cialdini. The discovery work we do with clients before ever engaging with them helps us identify their key triggers, but the methodology looks like this:
    1. Let the data speak for itself
    2. Comparison to others (benchmarking)
    3. Loss-based analysis

    Given that my role is largely focused on letting the data speak for itself, here are a few examples. When Customer Relationship Metrics is engaged with a business partner for whom we’re collecting customer experience surveys, we let the customer data speak for itself. As compelling as I would like to think I make data seem, at the end of the day the customers are the ones who vote with their dollars, making them the most important and persuasive voice to executive management. It’s hard to argue with a comment like this one from a customer: “In 2001, I closed all my accounts and lines of credit with you. I told my employees that if any of them continue to do business with your bank, they can consider themselves terminated. I will never do business with you again.”

    A few (or a few dozen) comments along these lines and it becomes very hard to assert that your organization’s approach to customer service is the right one.

    In situations where our engagement focuses on business intelligence (and customer experience data may not exist), we once again rely on data to debunk an organization’s notions about service, management, business practices, etc. It is difficult to argue with data indicating that allowing a call to extend beyond the 7-minute handle time “guideline” results in the elimination of XX% of calls; that the same call handled by vendor A as opposed to vendor B takes XX seconds less and results in a XX% higher rate of resolution; or that agents waste XX% of their time assisting customers with a rebate process the organization purposely made difficult.

    I hope this begins to address your question Jay!

    Thanks again for your thought-provoking post,
    Carmit DiAndrea
