Now that we’ve talked about the “Naughty List,” let’s move on to the “Nice List” of call center practices I’ve seen this year. Put simply, these are practices that deliver value to the customer while positively impacting the company’s bottom line.
1. Putting customers in the driver’s seat – According to our research to date in 2010, 65% of all customers who called the organizations they do business with did so because they perceived a problem. And when they call, they have anxiety – anxiety that you (the company) won’t stand behind your product/service, that you’ll charge them exorbitant amounts to do XYZ, that you won’t help them. World-class agents recognize this anxiety and allow customers an opportunity to voice all of those fears. And then they offer their customers options – yes, options – so the customer can select the outcome that is most pleasing to them. This may fly in the face of all the conventional wisdom that says agents should be consultative and make recommendations, but the fact is that agents know very little about the needs of the customers they serve. Rather than assaulting customers with a barrage of generic questions designed to uncover needs the customer may not yet have identified, let the customer choose. The fact that, as an organization, you were (seemingly) willing to do whatever the customer wanted to reconcile the situation will leave a far more lasting impression than how the issue was ultimately resolved.
When our CEO says, “I can’t see the whole picture,” I know she is feeling a lot of anxiety about making the wrong decision. An incorrect decision made today is much worse and harder to recover from than a wrong decision made just a few years ago. Every call center is operating leaner, and the loss of anything (time, money, people) from a wrong decision can be disastrous. So the insights you depend upon must be comprehensive and cannot fail you.
Picture this: I was recently experiencing slow speeds with my DSL connection at home. After rebooting my modem, router and PC, I called my service provider because the connection was still slow. Oh, how painful it was waiting (and waiting) for web pages to load.
This holiday season I find myself thankful for the many gifts in my life – family, friends (new & old), health, joy, talented co-workers and a slightly wicked sense of humor. As I sat down to write my holiday gift giving list, I started thinking about who’s been naughty and nice in my life. Children all across the world know exactly what gets them a pile of coal in their stocking.
Having visited dozens and dozens of call centers, I often wonder how that simple distinction between a good idea and a bad one can get so lost in the midst of so many good intentions. In that spirit, I’m revealing my list of naughty and nice call center practices. We’ll start with the naughty! These are practices that, if employed in your call center, should be re-evaluated so that next year you can make Call Center Santa’s other list.
Who doesn’t love call-ahead seating at restaurants to guarantee immediate seating upon arrival? The concept that restaurants created a service allowing me to be expected at the hostess station and be quickly taken to a table tells me that the restaurant understands the value of a dining experience and, by proxy, that they value my business. A true enhancement to the customer experience! Living my life in the call center space is a customer-service-oriented blessing (or curse depending on the day) that directs me to appreciate gestures that value me as a customer. A fundamental principle that makes me use this service is rooted in the fact that I trust it to work. I trust that when I make the call and then drive to the location, I will be on the list.
So, when I was sitting in a recent meeting with a business partner listening to a discussion about the implementation of virtual queuing (also known as automated callback service), I was reminded of my call-ahead seating option. The call center management team had many reasons to be interested in virtual queues, from enhancing the customer experience to making more effective use of their human capital. They took the time to present each reason during the meeting (with supporting charts).
What’s the difference between calibration and Inter-Rater Reliability? Part 2 of a 3-Part Series on IRR
In my 14 years in the call center industry, I have had many occasions to visit call centers in nearly every industry imaginable. I’ve come across different examples of calibration, each intended to reduce risk to the organization from customer service:
- A group of Quality Assurance (QA) folks sitting in a room listening to calls and then discussing them,
- A group of agents sharing their opinions with QAs on how they think their calls should be graded,
- QAs and agents debating the attributes that separate a good call from an excellent call, from a mediocre or bad call,
- A lead QA, manager, trainer, consultant or client instructing members of the QA team on how to evaluate calls, and
- A lead QA, manager or trainer playing examples of (pre-selected) good and bad calls.
While these may be common call center practices, they are far from best practices. In order to drive long-term improvement in the consistency and accuracy of your QA team, the outcome of any calibration process must be quantifiable, repeatable and actionable.
Inter-Rater Reliability (IRR) versus Calibration
Inter-rater reliability studies are more than structured or unstructured conversations. IRR studies demand a rigorous approach to quantitative measurement. IRR studies require that an adequate number of calls be monitored, given the size of the Quality Assurance team, variability in scoring, the complexity of calls, complexity of the monitoring form, etc. Inter-rater reliability testing also requires that call scoring be completed individually (in seclusion if possible). While discussion is key in reducing scoring variability within any Quality Assurance team, scoring and discussion of scoring variations must become separate activities which are conducted at different points in time.
Inter-Rater Reliability testing aims to answer two key questions:
1. “How consistent are we in scoring calls?” and,
2. “Are we evaluating calls in the right way?”
In other words, the goal of IRR is certainly to ensure that each member of the Quality Assurance staff grades calls consistently with his/her peers. However, a high degree of consistency between members of the Quality Assurance staff does not necessarily ensure that calls are being scored correctly in view of organizational goals and objectives. A further step is needed to ensure that call scoring is conducted with reference to brand image, organizational goals, corporate objectives, etc. This step requires that a member of the management team take part in each IRR study, acting as the standard of proper scoring for each call.
Efforts to attain high degrees of inter-rater reliability are necessary to ensure fairness to your agents whose calls are being evaluated. Your agents deserve to know, with a high level of confidence, that their monitored calls will be scored consistently, no matter which member of the Quality Assurance team scores them. And they need to know that those calls are scored fairly. Without valid and reliable methods of evaluating rep performance, you risk making bad decisions because you are basing them on faulty data; you risk lowering the morale of your agents through your very efforts to improve it; you open yourself to possible lawsuits for wrongful termination or discriminatory promotion and reward practices. You, too, need to know that your quality monitoring scores give reliable insight about the performance of your call center and about the performance of your agents on any individual call.
Sample Reports from Inter-Rater Reliability Study
Based on the figures above, it is very clear that the members of the QA team are relatively equal in scoring accuracy (defect rate) but that QA#1 struggles to accurately score in an area that is critical not only to the internal perception of agent performance but to the customer experience as well (auto-fail questions). QA#1 also tends to be the most consistent in his/her scoring with the remaining members of the team (correlation). From a call perspective, it is clear that calls 6 and 10 included scenarios or situations that were difficult for the QA team to accurately assess. Improving upon the current standing may mean redefining what qualifies as excellent, good or poor, adding exemptions or special circumstances to the scoring guidelines or simply better adherence to the scoring guidelines that already exist.
Does your calibration process deliver results that are this quantifiable and specific?
A few tips from our Inter-Rater Reliability Standard Operating Procedure:
1. Include in your IRR studies any individual who may monitor and provide feedback to agents on calls, regardless of their title or department.
2. Each IRR should include scoring by an individual outside of the QA team who has responsibility for call quality as well as visibility to how the call center fits with larger organization objectives.
3. Make sure each IRR includes a sufficient sample size – 10 calls at minimum!
Do not mistreat those who are testing your products and services! Who are the ones that will rapidly identify deficiencies? You know the answer is… your customers. They will be the first to tell you about it. Your challenge is to HEAR what is being said beyond the numbers of customer satisfaction scores, Net Promoter Scores and call resolution performance. Listen to the key metrics, but HEAR the definition of ‘why’ your customers are feeling such pains. The ‘why’ is uncovered in the analytics of the open-ended responses collected in post-call surveys.
If you are not systematically analyzing customer comments to quickly distinguish real issues from random noise, you are not HEARING the risk to your organization. The call center may not be the owner of risk, or be responsible for its response, but the call center is responsible for being the ears of the organization.
Our business partners seek the ‘knuggets’ of wisdom in their real-time post-call surveys, revealing true customer pains that shape the definition of risk to the organization. The survey calibration process helps them not only discover the problem but also craft the solution. With our business intelligence team, predictive analysis is used to prevent future occurrences of potential issues, resulting in the ever-important brand protection. Risk to your brand can be mitigated by separating random noise from real issues:
“I have not received my gift card so I can purchase a new phone. Therefore, I had to pay cash out of pocket. The card was supposedly shipped out of the 6th of this month. Today is the 26th. It’s been far too long. Apparently they have to send out a check now. Now all my bills are backed up because of it. I have the potential of being homeless, and therefore losing my job. Thank you.”
“I bought a microwave that is extremely loud and noisy. In proportion to space, I’d liken it to a jet in your garage.”
“The guys they sent out the last two times were a bunch of knuckleheads. They couldn’t even flush a toilet. He told me that my (ABC product) was not something that he had any experience with but spent two hours looking at it anyway only to determine that he didn’t know what to do. The people they sent out this third time finally took care of the problem. As far as being reliable and trustworthy as you all say you are, I wouldn’t agree at all with that anymore. Ok, bye”
“This is almost ridiculous. The ice maker isn’t working on this specific unit. It takes a phone call to you to call a service representative that tells me he’ll be there within a week or two. Then he shows up to tell me he has to order parts. After he orders the parts, it then will take a week or two for him to return. If you think this is very efficient, I’d like to know what kind cigarettes you’re smoking.”
We all expect product warranties. Manufacturers manage the risk of having warranties. Oftentimes, the two philosophies do not line up. The delivery of the warranty service is often not even maintained by the manufacturers but through a separate organization. Many times the competing interests of the two organizations leave the customer in a bad place. ‘Knuggets’ like the ones below highlight this problem. The ‘knuggets’ were used to develop new processes to alleviate the problem with a solution that is fair to the customer and still profitable for the company.
“I’m trying to get my TV repaired. Normally this store and their warranty would get a 10. But with all the problems from this on a scale of 1 to 10, I’d give a 0, because they’re giving me the runaround. My husband is dying and I have to be on this stupid telephone. Thank you.”
“I’m dissatisfied with the service of the extended warranty. A mouse chewed through our cord. I was told this was customer abuse.”
“I called your extended warranty customer service center to get my dishwasher fixed because it’s spewing water all over my floor. I found out that there are no servicers in my area. I live in Maryland for goodness sake, you know, near Baltimore and Washington DC, not Timbuktu! Why did you sell me this product with a warranty that cannot be used and will be voided if I get someone who isn’t on your list to fix my product? I can tell you this, I won’t be buying another one of your products again. Honestly, I’d throw this in the Bay if I could carry it. “
“Be honest and just say that you won’t stand behind your product instead of making us think you will and try to get you to and then leave the call being mad as heck. Just be honest and label the darn things “buy at your own risk.”
Whenever I have an opportunity to visit a business partner’s call center, I take a few minutes to conduct a rather un-scientific test, call it morbid curiosity. As I pass by cubicles and am introduced to call center staff, I always ask how agent performance is assessed. To me, the variety of responses I hear speaks volumes and perhaps helps explain the responses I get from call center agents when I pose the exact same question. Typical responses are a shrug of the shoulders, a shaking of the head and a quick glance to co-workers for reinforcement. They don’t know, feel they cannot explain the complexities or simply don’t remember.
Call center agents are expected to know and retain more and more information in today’s complex business environments. Unfortunately, our short-term and working memory capacities have not increased to accommodate this environment. Agents are also expected to generate customer perceptions that the service was excellent while managing the call to the operational metrics. Talk about feeling committed…to an asylum!
If you want your agents to feel less like they NEED to be committed and more like they are committed to the customer experience, keep in mind and act in accordance with the very basics of their job expectations. And be clear about those expectations. One business partner I visited earlier in 2010 had their KPIs updated in real time on dozens of flat screens in the call center. Another business partner created banners as a colorful reminder of the quarter’s call center initiatives. Great ideas, because the agents understood why the KPIs were important, what impact they individually have on them and how they benefit from performing well against them. Don’t assume this is the case without testing your assumption.
A balanced scorecard can serve as a visual cue for agent success. Well-designed balanced scorecards are typically made up of four parts:
1. Metrics by which performance will be assessed,
2. Performance objectives for each metric,
3. Weighting applied to each metric (an indication of relative importance), and
4. An individual agent’s performance on each metric.
Selecting performance metrics
Traditionally, call centers have managed performance based on the goal of operational efficiency. We see this drive for efficiency continue today through the management of call center agents to metrics like average handle time, number of calls handled, after call work, etc. While these are very important metrics, the fallacy is in managing a call center to only these internally-focused metrics. Customers do not care how much time an agent has to spend filling out paperwork or electronic forms after the call ends. Rather, they care that an agent is available within a reasonable amount of time when they call. Customers care even less about how long they need to spend on the phone with a (single) call center agent, as long as the problem has been resolved with that call. A 30-minute call might end with a delighted customer, a frazzled call center manager and a very confused agent. Are you seeing the disposition toward schizophrenia now?
The key to success is in selecting a variety of metrics that speak to the customer experience and balancing them with the business need for efficiency (we will speak more about this balance when we talk about scorecard weighting). Best-in-class business partners also incorporate other data sources into their agent scorecards such as internal quality monitoring data, chat, text, SMS and email data, etc.
Setting performance objectives
We’ve all been in situations where a goal was picked out of thin air by a well-intentioned executive and then carved into stone for us to follow. In the absence of this scenario, the best goals are based on actual historical performance. Important elements to consider in setting performance goals are:
- Mean or median? (measures of central tendency) – In order to set a goal for future performance, we must first have an understanding of how we’ve performed in the past. Measures of central tendency indicate the point on a performance continuum where the members of a group or dataset tend to gather. While the mean (often referred to as the average) is more widely reported in call centers, it is most useful in groups whose performance is relatively normal (normal from a statistical standpoint, that is). A normal distribution is one in which the majority of group members’ performance is centered around the middle of the performance continuum and the distribution of performance is perfectly symmetrical to the right and left – in short, a bell curve. Unfortunately, this type of distribution is not typical of call center performance. As such, the median (the point at which half of the group’s members fall above and half fall below) may be a better way to determine how the call center “typically” performs on any given metric.
- Time frame of historical data – Having decided whether the mean or median will be the most appropriate statistic for determining a baseline of past performance, we must now define a time frame to represent history. At a bare minimum, Customer Relationship Metrics recommends that at least three months of data be used to minimize the impact of anomalies in performance and non-normative events impacting performance. Ideally, a larger time frame would be used which encompasses all stages of a company’s business cycle or seasons (one year). The danger in using more than a single year of historical data to establish a performance baseline is the possibility of negating or underplaying recent performance gains – essentially making the performance goal too easy to achieve.
- Predicting the future – Once a historical baseline of performance has been established, the same data set can then be used to make predictions about future performance (statistical modeling). Performance objectives can then be based around those predictions. Some business partners have also found success in applying a 5% to 7% “lift” to historical performance and using the lifted figure as the performance goal for the following year.
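The mean-versus-median point and the baseline-plus-lift approach above can be sketched in a few lines. The handle times below are made-up numbers chosen to show the typical call center pattern: a few very long calls drag the mean well above what a "typical" call looks like, which is why the median makes a better baseline.

```python
# Handle times (seconds) for a batch of calls -- two long outliers
# skew the distribution, as is common in call center data.
from statistics import mean, median

handle_times = [210, 230, 190, 250, 220, 205, 240, 215, 900, 1200]

print(f"mean   = {mean(handle_times):.0f}s")    # pulled up by outliers
print(f"median = {median(handle_times):.0f}s")  # robust "typical" call

# Baseline-plus-lift goal setting: apply a 5% improvement to the
# median baseline (lower handle time is better, so the goal is 5%
# below baseline). The 5% figure is from the range quoted above.
baseline = median(handle_times)
goal = baseline * 0.95
print(f"next-year AHT goal = {goal:.0f}s")
```

On this sample the mean lands far above the median, so a goal set off the mean would be trivially easy to hit; the median baseline gives a goal that reflects how calls typically run.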
Weighting each metric
The weighting applied to each metric on a scorecard indicates its relative importance to the call center and to the larger organization. Before arbitrarily applying weighting or points to each metric, think about the organizational goals that have been set for the fiscal year and the ways in which the call center contributes to these goals. Doing so will help you make the first critical decision – whether to focus on the customer’s experience or on organizational costs. Weighting within each category of metrics (operational vs. customer experience, etc.) can then be determined based on the degree of impact each metric has on the category outcome (ex: issue resolution has a higher impact on customer experience than courtesy, so issue resolution should have a higher point or weighting allocation associated with it).
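The scorecard math itself is simple: each metric gets a weight reflecting its relative importance, and an agent's overall score is the weighted sum of per-metric scores. Here is a minimal sketch; the metric names, weights and scores are hypothetical examples, not a recommended allocation.

```python
# Hypothetical balanced-scorecard weighting. Note issue resolution
# outweighs courtesy, per the impact argument above.
weights = {
    "issue_resolution": 0.35,     # customer experience: highest impact
    "courtesy": 0.15,
    "quality_monitoring": 0.25,
    "average_handle_time": 0.15,  # operational efficiency
    "schedule_adherence": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must total 100%

# One agent's performance on each metric, normalized to 0-100
# against its performance objective.
agent_scores = {
    "issue_resolution": 92,
    "courtesy": 88,
    "quality_monitoring": 85,
    "average_handle_time": 70,
    "schedule_adherence": 95,
}

overall = sum(weights[m] * agent_scores[m] for m in weights)
print(f"balanced scorecard score: {overall:.1f}")
```

Notice how the weighting protects the customer-experience story: this agent's weak handle time pulls the overall score down only modestly because efficiency carries less weight than resolution and quality.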
Individual agent performance
If one of your goals in implementing a balanced agent scorecard is to keep agents informed about their performance and incite healthy competition, ensuring that your agents have ready access to accurate scorecards will be a key determinant in the success of the initiative.
During one of my recent visits to a business partner, I took my usual walk through the call center and was quite pleased to see the number of agents who were logged in to Customer Relationship Metrics’ MPM real-time agent scorecards. MPM (Metrics Performance Manager) is a reporting tool that Customer Relationship Metrics uses as part of our applied business intelligence services to gather data from disparate sources. We’ve found that one of the outputs of this reporting tool that can be very motivating to agents is the scorecard. In this call center, agents were actively managing their own performance and receiving immediate feedback from the system about the changes they were making to their interactions with customers. Feedback from the ACD about their efficiency, feedback from customer satisfaction surveys and feedback from the Quality Assurance team are all in a single location, updated in real time. Imagine the burden you would remove from your supervisors if your agents were that tuned in to their own performance!
Yesterday, I put it out there regarding a PITA and the barriers in doing business with organizations. In Part 2 of our 3-part series, it’s time to find out how you uncover that you are a PITA, complete with customer barriers. I’ll share the high-level details of a case study where we uncovered a major barrier with a business partner. This barrier was such a large pain in the ass (PITA) that it was significantly affecting the brand.
We all have someone in our life that is difficult to deal with or just plain obnoxious. Maybe it’s a neighbor or a sister-in-law or even an employee. Whoever it is, we often leave a conversation with them thinking, “wow, he is a real pain in the you-know-what!” He’s a P.I.T.A., otherwise known as a Pain In The Ass. For me, it’s my cousin Debbie. Don’t get me wrong; she’s my family and I love her, but she’s one of those people who is never happy, complains about everything and loves to make things difficult for others… a real P.I.T.A.
How do we ensure that customer experience results are a profitable business process in the call center and elsewhere in the organization? To increase the value of the initiative, be certain that the research is done the right way, and not only done for the sake of surveying customers. Note that customer feedback results will be used by colleagues regardless of the number of caveats listed in the footnotes, so be diligent in providing valid and credible customer intelligence from your contact center. The consequences of a poor measurement program and inaccurate reporting can have profound and far-reaching effects on your credibility in the organization.
Put another way, are you guilty of survey malpractice by giving your company faulty information based on inadequate research methods and interpretations?
Malpractice is a harsh word — it directly implies professional malfeasance through negligence, ignorance or intent. Doctors and other professionals carry insurance for malpractice in the event that a patient or client perceives a lack of professional competence. For contact center professionals and other managers, there is no malpractice insurance to fall back on for acts of professional malfeasance, whether they’re intentional or not. Of course, it is much more likely that one would be fired than sued for bad acts, but that offers little comfort.
Never put yourself in a position where your competence can be called into question. That’s why so many call center managers are “skating on thin ice” when it comes to their customer satisfaction measurements: there are demonstrable failings with many of the typical practices used by call center managers. By definition, an ineffective measurement program generates errors from negligence, ignorance and/or intentional wrongdoing. You have a fiduciary responsibility to your company — and recommendations made based on erroneous customer data do, indeed, meet the definition of malpractice.
Measurement programs must meet certain scientific criteria to be statistically valid with an acceptable confidence level and level of precision or tolerated error. Without these considerations, you are guilty of survey malpractice. Defending your program with statements like, “it has always been done this way” or “we were told to do a survey” is not sufficient. Research guidelines adhered to in academia apply to the business world, as well. A deficient survey yields inaccurate data and results in invalid conclusions no matter who conducts it. Unnecessary pain and expense are the natural outgrowths of such errors of judgment.
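One of the scientific criteria above, sample size for a stated confidence level and margin of error, can be checked with the standard formula for a proportion (e.g., percent satisfied). This is a generic statistical sketch, not a formula from this post; the conservative p = 0.5 assumption maximizes the required sample.

```python
# Required sample size for estimating a proportion:
#   n = z^2 * p * (1 - p) / e^2, rounded up.
import math

def required_sample_size(confidence_z: float, margin_of_error: float,
                         p: float = 0.5) -> int:
    """z is the normal critical value (1.96 for 95% confidence),
    e is the tolerated error, p the expected proportion (0.5 = worst case)."""
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

# 95% confidence with a +/-5% margin of error:
print(required_sample_size(1.96, 0.05))  # 385 completed surveys
```

A program reporting monthly scores from a few dozen surveys, then, cannot honestly claim a tight margin of error; either the sample grows or the reported precision shrinks.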
To maximize the return on investment (ROI) for the EQM customer measurement program, and to ensure that the program has credibility, install the science before collecting the data. Make sure that the initial program setup is comprehensive. If there is no research expert on staff, then hire this out to a well-credentialed expert. The alternative is to train someone in the science around creating and interpreting the gap variable from a delayed measurement. Or better still, engage a qualified expert to design a program to measure customer satisfaction immediately after the contact center interaction.
Before assuming that survey malpractice does not or will not apply to your program, consider the following tell-tale signs of errors and biases, as they are critical to a good program.
1. Measuring too many things. Your survey of a five-minute call center service experience takes the customer 15 minutes to complete and includes 40 questions. While everyone in your organization has a need for customer intelligence, you should not be fielding only one survey to get all of the answers.
Should the call center be measuring satisfaction with the in-home repair service, the accounting and invoicing process, the latest marketing campaign, or the distribution network? Certainly input on these processes is necessary, but don’t try to get it all on a single survey.
2. Not measuring enough things. An overall satisfaction question and a question about agent courtesy do not make a valid survey. Without a robust set of measurement constructs, answers to questions will not be found. Three or four questions will not facilitate a change in a management process; nor will they enable effective agent coaching or be considered a valid measure to include in an incentive or performance plan.
3. Measuring questions with an unreliable scale. In school, everyone agreed on what test scores meant: 95 was an A, 85 was a B, and 75 was a C. Everything in between has its own mark associated with it, as well. Yet, when it comes to service measurement, we tend to give customers limited responses. What do the categories excellent, good, fair and poor really mean? Offering limited response options does not permit robust analysis, and statistical analysis is often applied incorrectly. In addition, using a categorical scale or a scale that is too small (like many typical 5-point survey questions) is not adequate for the evaluation of service delivery.
4. Measuring the wrong things or the right things wrong. Surveys should not be designed to tell you what you want to hear, but rather what you need to hear. Constructs that are measured should have a purpose in the overall measurement plan. Each item should have a definitive plan for use within the evaluation process. The right things to measure will focus on several overall company measures that affect your center (or your center’s value statement to the organization), the agents and issue/problem resolution.
5. Asking for an evaluation after memory has degraded. When we think about time, 24 to 48 hours doesn’t seem that long. But when you’re measuring customer satisfaction with your service, it’s the difference between an accurate evaluation and a flawed one. Do you remember exactly how you felt after you called your telephone company about an issue? Could you accurately rate that particular experience 48 hours later, after other calls to the same company or other companies have been made? That’s what you’re asking your customers to do when you delay measurement. It opens the door to inaccurate reporting and compromised decision-making, and is also an unfair evaluation of your agents.
Conducting follow-up phone calls to gather feedback about the center’s performance is a common pitfall. While the research methodology certainly should have its place in the company’s research portfolio, it’s less effective than using point-of-service, real-time customer evaluations.
Mail and phone surveys are useful for research projects that are not tactical in nature, but rather focused on the general relationship, product feature, additional options, color, etc.
6. Wiggle room via correction factors. If you’re using correction factors to account for issues in the data or to placate agents or the management team, some aspect of the survey design is flawed. A common adjustment is to collect 11 survey evaluations per agent and delete everyone’s lowest score. However, with a valid measurement that includes numeric scores, as well as explanations for scores and a rigorous quality control process, adjustments in the final scores will not be necessary. Making excuses for the results or allowing holes to be poked in the effort diminishes and undermines the effectiveness of the program, and highlights an opening for survey malpractice claims.
7. Accuracy and credibility of service providers and product vendors. As with any technology or service, the user assumes responsibility for applying the correct tool, or applying the tool correctly.
There are plenty of home-grown or vendor-supplied tools to field a survey, but, again, if you do not apply the functionality correctly, you will be responsible for the error. Keep in mind that some service providers are only interested in selling you something that fits into their cookie-cutter approach, and it will not be customized to your specific requirements.
~ Dr. Jodie Monger, President
This post is part of the book, “Survey Pain Relief.” Why do some survey programs thrive while others die? And how do we improve the chances of success? In “Survey Pain Relief,” renowned research scientists Dr. Jodie Monger and Dr. Debra Perkins tackle numerous plaguing questions. Inside, the doctors reveal the science and art of customer surveying and explain proven methods for creating successful customer satisfaction research programs.
“Survey Pain Relief” was written to remedy the billions of dollars spent each year on survey programs that can best be described as survey malpractice. These programs are all too often accepted as valid by the unskilled and unknowing. Inside is your chance to gain knowledge and not be a victim of being led by the blind. For more information, visit http://www.surveypainrelief.com/
Trust is the foundation of every relationship. In our personal relationships, bonds with family members, friendships and marriage are built on this one word: trust. Let’s face it: people do business with people they trust, too. Just recently, I went to a dealership to buy a new car. I fell in love with the car, but didn’t trust the guy trying to sell it to me. I walked away and right on over to the dealership down the street.
Gary Lemke, Publisher of CRMAdvocate, recently posted a blog that speaks to the need for organizations to be consistently authentic. I couldn’t agree more. Authenticity not only needs to be consistent; it needs to be apparent across the enterprise. Trust has to live in the face-to-face transactions of the retail world, and also in your call centers, via your call center agents. If your customers doubt your authenticity, you had better believe they will let you know.
“I just think I could get my TV fixed a little quicker than this. I called the company I’m talking to now and they said they outsourced it to another company. The other day, when I called and was talking to them, I was told that, that company outsourced it to another company. How about outsourcing it to somebody that will come out here and fix my TV? That’s all I ask, for somebody to come out and fix it.”
“When I call, asking for information about something being shipped, and you lie to me and tell me it’s already in the hands of FedEx, I don’t know how else you can improve other than to quit lying and fess up that the d*mn thing didn’t ship. Pretty clear for you?”
“Every time I call, I get a different story from you people. One person tells me technical support can handle my issue. So I am transferred to technical support. Technical support tells me they can’t help, but customer service can. When I talked to customer service, they explain that it has to go through sales. The sales department doesn’t want to hear from me about this stupid laptop not working, they just want to sell me some more of your crap. I’m sick of it! I have yet to hear the same answer twice to the same exact question from anyone! Stop telling me lies and fix my d*mn laptop.”
“I’m just getting the runaround and cannot get concrete answers. It’s like I’m speaking to a politician. Thank you.”
~ Dr. Jodie Monger, President
Photo Credit: www.callcentercomics.com
Consider your average number of calls per month, say 80,000, at an average fully loaded cost of $8.00 per call. You are looking at $640,000 to cover the customer interactions taking place in your call center. Now consider your repeat calls, say 30%, and you can see that it’s costing $192,000 to handle non-productive, low-value calls. We all know why First Call Resolution (FCR) is so important! “Knuggets” like these can capture the reasons customers need to call your call center multiple times, identify the root cause of the problem (call center agent, process or product problem) and help you get closer to an ideal FCR percentage with a few changes to your organization.
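The arithmetic above is easy to adapt to your own center’s numbers; here is a back-of-the-envelope sketch using the illustrative figures from this post (your volumes, costs and repeat rate will differ):

```python
# Back-of-the-envelope repeat-call cost, using this post's example
# figures: 80,000 calls/month, $8.00 fully loaded cost per call,
# and a 30% repeat-call rate.
calls_per_month = 80_000
cost_per_call = 8.00
repeat_rate = 0.30

total_cost = calls_per_month * cost_per_call  # total monthly call cost
repeat_cost = total_cost * repeat_rate        # cost of repeat calls

# Each percentage point of FCR improvement removes 1% of calls.
savings_per_point = calls_per_month * 0.01 * cost_per_call

print(f"monthly call cost:        ${total_cost:,.0f}")
print(f"cost of repeat calls:     ${repeat_cost:,.0f}")
print(f"savings per point of FCR: ${savings_per_point:,.0f}")
```

Even a modest FCR improvement compounds quickly at this scale.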
“It took four phone calls to get a pink slip. I’ve paid the car off, I deserve the pink slip. The first call I made said I would get it in 10 days; it’s now been 6 weeks. This phone call said it was mailed yesterday. Somehow I doubt that, but we’ll see. If I don’t get it, I’ll call you back. I don’t mind. I’m retired. I’ve got nothing to do but call you folks until I get what I need.”
“The agent was very helpful today in resolving my issue. She’s the only person who has ever found out where my limited warranty was on page 35 of the manual. I’ve talked to numerous people at your place, people at the store, and people with your company. I’ve talked to at least 12, maybe 15 people. Finally she got everything settled to the best of her ability. I am very appreciative. I would name my child after her. Thank you.”
“Maybe someone three calls ago could have told me to look behind the webpage for the date range selection box? There is no reason for someone to assume that it’s there and just hiding. And, telling me that this happens a lot and not to feel bad for calling makes me no longer happy that I got help but mad that I needed it in the first place.”
“You’d think you would tire of hearing how to avoid this common problem that I know everyone and their brother is having. How do I know? I took a poll at my son’s high school baseball game last week, and at the grocery store and the bank. Maybe you are waiting for me to be hired as a consultant? I’m not interested because you can’t fix stupid.”
Still struggling with FCR? Find out how you can improve First Call Resolution now!
A few weeks ago, I was reading an interesting article about schizophrenia. It covered the statistics, symptoms and treatment for this terrible disease. At first I was alarmed by the recent research numbers: an estimated 3.2 million Americans suffer from this mental illness. Wow. As I read on, I learned that four types of “delusions” exist in schizophrenics, and from that list of four, “Delusions of Control” is the one that really struck a chord with me. Naturally, I started to draw some parallels between this particular symptom and people I know, myself included, and those in my line of work. Based on the delusion of control alone, I do believe it’s fair to say we all have a touch of schizophrenia from time to time. Perceived control is a way of life in the call center.
When reflecting on life inside a call center, it’s easy to believe that we are patients whose delusions often go unmedicated. The call center as an asylum may not be a stretch! Not only is it insanely intense, it is also a place of constant contradiction. We expect our employees and our call center agents to adhere to a specific model intended to produce a controlled response (a great service experience). In the same breath, we expect that model to produce the opposite result (do it fast, right and cheap). Isn’t this setting your team up to feel schizophrenic? We allow agents to believe they are in control, but in reality, they are not.
I was reminded of this parallel when speaking with one of our partners last week. This particular client had three service centers using the “Pay for Performance” model with their agents. As he elaborated on the damage this was causing, I recalled the connection between my recent revelation about call center schizophrenia and the “Pay for Performance” model (particularly in service-oriented call centers). In this model, agents are paid based on metrics such as the number of calls handled and the number of minutes spent on those calls. This is the expectation set forth. At the end of the month, organizations are left scratching their heads as to why customer satisfaction scores are so low. The innate service component is being squeezed out of agents as they hurry on to the next caller. Yet we expect an outstanding customer service experience to come from our service-oriented call center, right? Insanity in its true form, and we’ve all had this conversation with ourselves and everyone on the management team.
This is the first in a two-part series on designing the perfect, or as-perfect-as-you-can-get, model for service call centers. Part One discusses the “Pay for Performance” model, how it has been incorporated into service call centers and how it is affecting your agents and your customer service scores. Part Two will discuss how to build effective balanced scorecards and, in turn, a more appropriate model for your service call centers. We need to control the insanity!
What is “Pay for Performance?”
“Pay for performance,” also known as incentive pay, rewards workers based on the outcomes they achieve, as opposed to the traditional model of paying for time worked. These models have been wildly popular in outbound telemarketing for many years, advancing the earning potential of skilled salespeople while “weeding out” those who, in a conventional pay model, would largely rely on their base salary to pay their bills. More sophisticated (sales) incentive pay models financially penalize agents for “buyer’s remorse,” encouraging quality sales acquisition methods.
Sales vs. Service
While time- and outcome-pressured compensation models may work in a sales environment, they represent the antithesis of what is needed in the service world. Conventional wisdom states that in a sales environment, there is only one outcome that matters: sales that “stick.” Certainly there are complexities in how a call center agent reaches a “yes,” but that does not negate the fact that a single outcome guides the call flow. A customer service call center is far more complex. Customer service agents are tasked with resolving calls in a manner that pleases customers and builds brand loyalty, while remaining sensitive to everyone’s time: the customer on the phone and the one waiting to be helped. That is quite a tall order, especially when a case can be made that the Sales team is often responsible for the call to the Service team. At the end of the day, incenting agents based on a single outcome may expose your organization to a very high level of business risk.
“Why are my customer service scores so low?”
In a service center, the correlation between time spent and outcome is much more fluid. Let’s examine some of the unfortunate outcomes of ill-conceived pay-for-performance models in customer service centers:
As an assignment, add your metrics to this list and evaluate them against the delusion of control construct.
“If not ‘Pay for Performance,’ then what should we use?”
In a service center, a balanced agent scorecard is a far more effective way to pay and incent agents. Balanced scorecards force agents and their managers to focus their attention on more than a single Key Performance Indicator (KPI). Some of CRM’s existing customers have access to an important tool that helps them determine the relative importance of agent skills from the customer’s perspective: predictive (regression) modeling. In the figure below, the beta levels on the left indicate the level of impact each agent skill had on the customer’s overall perception of the agent’s performance. The right side of the figure indicates the current performance level of that skill (on a 1 to 9 scale).
One business partner uses this regression output to not only set priorities within their agent scorecard, but to also set priorities for ongoing / developmental training for the upcoming quarter. The figure below indicates the degree of improvement in customer satisfaction this business partner has been able to achieve by linking customer, agent and training priorities.
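The regression approach described above can be sketched in a few lines. The skill names, survey data and coefficients below are entirely hypothetical, not CRM’s actual model or any client’s data; the sketch simply shows how regressing overall ratings on skill ratings yields the betas used to rank scorecard and training priorities:

```python
# Minimal sketch: estimate each skill's impact (beta) on the customer's
# overall rating of the agent, then rank skills by impact.
# All names and data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of post-call surveys

# Skill ratings on a 1-to-9 scale, as in the scale described above.
skills = ["courtesy", "knowledge", "ownership"]
X = rng.integers(1, 10, size=(n, len(skills))).astype(float)

# Simulated overall rating: in this fake data, "ownership" matters most.
true_betas = np.array([0.2, 0.5, 0.9])
y = X @ true_betas + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
betas = coefs[1:]

# Skills with the largest betas become scorecard and training priorities.
for name, b in sorted(zip(skills, betas), key=lambda t: -t[1]):
    print(f"{name:10s} beta = {b:.2f}")
```

The ranking, not the raw beta values, is what drives the scorecard weights and the next quarter’s training plan.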
Companies using the “Pay for Performance” model in their service call centers will remain at war with themselves. If you pay your agents on the number of calls they take, you will get a high number of repeat callers and lower FCR rates because customers are rushed off the call. If you are currently using the “Pay for Performance” model in your service center, have you experienced similar results?
Now that we’ve identified a more suitable, more effective model to adopt in your service call centers, it’s time to discuss the “how.” Part Two will cover just how you can build effective balanced scorecards to incentivize your agents.
365 Days of Delivering Elite Customer Experiences in your Call Center: Customer-centric sweat and celebration
Throughout the years of raising my children, my wife and I have come to realize what truly motivates them. All three of my kids, though different in their passions, hobbies, activities and personalities, share the same intrinsic motivator: praise and recognition. In my household, my children are not rewarded for good grades or pretty artwork, but rather for the effort behind the work. We are more proud of the commitment, consistency and discipline it takes to get the 100 on that spelling test.
We at CRM know the power of the commitment, consistency and discipline associated with being customer-centric. As such, we felt it was important to take that same approach with our own customers. Top-line organizations that have made a proven, disciplined commitment to customer-centricity and consistently delivered elite performance throughout the year need to be recognized. And with that, last year the Elite Customer Experience Awards were born. In January of 2010, the award winners were finalized, and on February 15, 2010, they were announced to the world.
About the Awards
This program is unlike any other. The winning organizations were measured on outstanding customer satisfaction performance over the entire year! For 365 days! Awards were given in several categories, including: Utility Provider of the Year; Product Support Provider of the Year; Agent of the Year; Team of the Year; Outsourced Contact Center of the Year; In-house Contact Center of the Year; and the Elite Customer Experience Award. These organizations excel at transforming customer experience analytics into action. More than 230,000 real-time post-call customer satisfaction surveys were analyzed by a CRM expert on the research and client services team. Award winners’ highlights include: a 16.7% reduction in repeat call volume; a 5% reduction in customer attrition; reduced costs to acquire new customers; and increased lifetime customer value and employee engagement.
The Buzz Around the Boom
Naturally, the award-winning companies were thrilled with the news that they had won such a prestigious honor. We had expected this. We expected they would be proud and happy to praise their teams. What we did not expect was the whirlwind of celebrations that soon followed. A few of our customers threw awards parties by way of a luncheon or a happy hour. Announcements soared through the C-Suite in some organizations, and some customers received praise directly from their CEOs. One CFO sent an email to the entire organization (including the Board of Directors) singing the praises of the call center’s performance, stating, “… now can say that we are DEMONSTRABLY best!!”
Our very own CEO, Dr. Jodie Monger, comments: “It is with great pleasure that I congratulate these organizations for their exemplary performance and excellence in customer care. The frontline, leadership and management teams have engineered the service experience to be profitable for the organization by highlighting the customer experience. Each has leveraged the customer intelligence from their customers to earn these awards. CRM is proud to play its part in enabling the customers to speak.”
We Tip Our Hats to…
….the 2010 award winners and honorable mentions:
- Elite Customer Experience Award – Otter Tail Power Company; Honorable Mention – Portland General Electric
- Utility Provider of the Year – Otter Tail Power Company; Honorable Mention – Portland General Electric
- Product Support Provider of the Year – HP Home and Home Office Store; Honorable Mention – Black & Decker
- Agent of the Year – Joyce Sanders, Cincinnati Children’s Hospital; Honorable Mention – Tracey Forbin, Cincinnati Children’s Hospital
- Team of the Year – Mindy McDulin Team, Cincinnati Children’s Hospital; Honorable Mention – Damian Reiter Team, Otter Tail Power Company
- Outsourced Contact Center of the Year – Michelin North America
- In-house Contact Center of the Year – Michelin North America; Honorable Mention – Otter Tail Power Company
A look into 2011 and beyond…
With the announcement of the award winners to our entire customer base during the February Customer Insights to Action Meeting (CRM’s User Group), the competitive nature among the organizations began to stir. During monthly
~ Jim Rembach, Chief Spokesman
For more information: Elite Customer Experience Awards