Call Center Training
Sharing skills and knowledge with call center agents and/or managers to develop personnel, increase productivity, and raise morale.
We’re told all the time to ‘think outside the box’. In school it meant looking at a passage in a book to see the symbolism; that the words were more than mere words. In the call center it means something as simple as creatively solving a customer’s problems.
I recently tried to book a summer vacation house, and after tireless research and dead-ends I called the Board of Tourism. There I spoke with a lovely woman who gave me countless phone numbers to try, web sites to further my research, and even an offer to call some of her contacts in the area to see if they could assist me. I was blown away by her resourcefulness and willingness to help. She called me back the very next day with vacation packages and pricing, as well as the personal phone numbers of her contacts. Too often agents lose that can-do, problem-solving spirit.
Here are some recent customer comments from External Quality Monitoring programs:
The deployment of smart meters has generated a tidal wave of data for utilities to manage, and beyond the initial data storage challenge, there exist real questions about how to use and share this information with consumers. In an article published on smartgridnews.com back in 2009, Jack Danahy estimated that 140 million smart meters installed over a period of 10 years would generate 100 petabytes (1 quadrillion bytes) of information. That’s a lot of data, and the effort to store it is a wasted exercise if the analysis is never used to better project consumer demand and to help consumers better manage their consumption.
One of the utilities that Customer Relationship Metrics supports recently decided to make use of the data they were gathering, and for very good reason. According to OPOWER, an energy efficiency and Smart Grid software company, consumers who receive data about their electricity usage reduced their energy consumption by 1.8% (which, according to the EDF, could curb CO2 emissions by 8.9 million metric tons annually). This utility mailed customers a snapshot of their electricity usage compared to the usage of other customers in their immediate area, along with tips on how to decrease energy usage. A company proactively informing customers how to use less of their product!!! What’s not to love? Apparently a lot. Customers who were notified that their electricity usage was comparatively high began contacting the utility’s call center in droves, complaining of over-charging, bad meter-readings and malfunctioning meters. The call center and its agents were unprepared for both the volume of calls and the negative response to the letter. And I was as surprised as everyone by the backlash. Continue reading “How to deliver bad news to ‘smart’ customers.” »
A few weeks ago I had a mishap with an electronic billpay that brought together – and then set apart – three financial institutions. Admittedly, I made a mistake in creating the electronic payment request. My local bank generated a physical check rather than transferring the funds via ACH (Automated Clearing House), and sent it on to institution #2 to process for financial institution #3 located in the United Kingdom. This error took hours of my time over a number of weeks to resolve. When it was finally over, I wanted to blast one financial institution on every social media platform I could find, wrote a thank-you letter to another and felt as indifferent about the third institution as they felt about me.
My local bank, First National Bank of Omaha took an electronic request for the transfer of funds and executed it via paper and then sent it via pony express (kidding, it was US mail), losing the tracking capabilities possible with an ACH. But the moment I called their customer service department, I had their attention and their commitment of assistance. My agent, Tania, conferenced me into First National Bank’s billpay department, inquired about next steps and stayed on the phone with me for over two hours as we made our way through the phone-tree-from-hell and more transfers than I could count at GIANT BANK (not their real name). My local financial institution received a thank-you letter, along with my business for as long as I remain a resident in their coverage area. Continue reading “You vs. your competition, head-to-head, how’d you do?” »
Many of us in call centers chase the holy grail of higher agent tenure, assuming that agents will use the additional knowledge and experience attained through tenure to better serve customers. The unfortunate reality, according to customers, is that more tenured agents don’t deliver a better customer experience; they deliver a worse one, despite being armed with all of the knowledge and skills that “rookies” are thought to be acquiring. And that customer experience continues to diminish the longer your agents languish in your call center.
During our recent Customer Insights to Action meeting (a quarterly meeting open to all of our existing business customers), Customer Relationship Metrics refreshed a 2007 study of this same subject. In 2007, analysis of the customer experience found that agent performance peaked in month 11. At the time, we hypothesized that the peak of this performance bell curve would vary based on industry, management style, new-hire training, company culture and a number of other variables. What we found just recently is that peak service performance is rated by customers when the agents’ tenure is between 9 and 11 months.
My friend Julie has taken call after call as an agent for nearly 10 years. I have it on good authority that Julie is one of the best agents out there, but it’s been my experience that Julie is the exception and not the rule. In fact, we recently completed analysis revealing that agent performance peaks and then declines at about 10 months of tenure. Customer evaluations indicate the service peak and decline is related to the tenure of the agent, not the time of day, month or year! This makes me think about the complaints we get from customers about the lack of knowledge and care that they receive from lackadaisical agents. When customers can voice their opinions about service engagement so quickly and publicly, a positive agent experience is paramount. Where are these agents on the tenure life cycle? Continue reading “Who knows when call center agents burnout first?” »
As part of our customer intelligence services, I take our business partners through the analysis of their survey results on a monthly basis. A common first question is “Why is satisfaction with the company so much lower than satisfaction with the agent?” My answer is always, “for many reasons!” Agents are engaged in their jobs, are friendly, polite and responsive. Some even go out of their way to do right by the customer. What’s not to like? And who wouldn’t think that it is far easier to perceive a connection with this kind of agent than with the nebulous concept of a company? Perception of the company is also impacted by media, user reviews, competitors in the marketplace, price, product quality and much more. All of those elements are more difficult to control than a single interaction between a call center agent and a customer.
A gap between the scores is expected, but that doesn’t mean it should not be validated, monitored and narrowed over time. One way to explore the elasticity of the gap between company and agent perception is to look at the gap at the individual level. Analysis of the customer comments left for agents operating at the extreme ends of the gap illuminates reasons for the ratings. (Caveat: ensure a sufficient sample size for each individual included in this gap analysis; a minimum of 30 surveys per individual.)
The company represented in the figure below has an average gap of 8 points between satisfaction with the call center agent and satisfaction with the company. The analysis of the drivers of this gap was conducted using call recordings and comments provided for the agents who were more than two standard deviations away from the mean.
The analysis revealed that agents with an abnormally large gap between their satisfaction score and satisfaction with the company were far more likely than other agents to speak poorly of their peers, other departments within the organization, third party vendors, partners, etc. The technique of “throwing someone under the bus” may have created a bond with the customer and earned them a higher agent satisfaction rating, but this came at the high cost of company loyalty.
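The two-standard-deviation screen described above can be sketched in a few lines of Python. The agent names, ratings, and outlier in this sketch are illustrative assumptions, not data from the study; each agent is assumed to already meet the 30-survey minimum.

```python
from statistics import mean, stdev

# average satisfaction-with-agent and satisfaction-with-company ratings
# per agent (each pair assumed to be based on at least 30 surveys)
ratings = {
    "agent_01": (8.8, 8.1), "agent_02": (8.5, 7.9), "agent_03": (9.0, 8.3),
    "agent_04": (8.2, 7.6), "agent_05": (8.7, 8.0), "agent_06": (8.4, 7.8),
    "agent_07": (8.9, 8.2), "agent_08": (8.6, 7.9), "agent_09": (8.3, 7.7),
    "agent_10": (9.1, 4.9),   # suspiciously large gap
}

# per-agent gap: satisfaction with the agent minus satisfaction with the company
gaps = {agent: a - c for agent, (a, c) in ratings.items()}
mu, sigma = mean(gaps.values()), stdev(gaps.values())

# candidates for a closer look: more than two standard deviations from the mean gap
outliers = [a for a, g in gaps.items() if abs(g - mu) > 2 * sigma]
print(outliers)  # expect ["agent_10"] with this data
```

In practice the flagged agents' call recordings and survey comments would then be reviewed, as described above.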
No one trains agents to make themselves look good at the expense of someone else, yet the call recordings were full of comments like these:
- “We’ve had so many problems with the ice-makers on that model,” “That company is so bad about submitting invoices to us. They’re always late or missing information,” “We’ve gotten a lot of complaints about those disconnect messages” and “I’m sure they’re nice people but they’re so hard to understand with their accents”
The impact of the agents’ creative deflection of responsibility is evidenced by the gap and observed in the survey comments:
- “Your customer service reps are wonderful and courteous but you don’t give them access to any information. You are not sure what time you are going to turn on service. I’ve got two small children and I’m told that service can go as late as after five o’clock, which I find is not fair. It can be six o’clock without me having electricity in my apartment and it’s 90 degrees outside. I don’t think that’s fair.”
- “It is not the representative. It is the people you hire to send out here. They need to be qualified. Apparently you’ve had problems with them in the past, so I don’t know why you keep sending them out. That’s it, period.”
- “I don’t ever really feel that anything is ever corrected. Customer service sends stuff to the claims department, and I feel that the CSRs know what they are doing and everything else, but once it gets to the claims department I don’t know what happens, but everything gets messed up all the time. I don’t even think they believe in their own claims department.”
To eliminate this type of behavior, agents must be accountable, to some degree, for company-level customer feedback. By including analysis that focuses on the ratio between company- and agent-level ratings, agents using “throw them under the bus” techniques can be identified. Agents need to understand that by deflecting and empathizing with customers in that way, they are degrading customer perception of the brand they are working to serve.
In some shape or form, at work or at home, we all want to be all-stars, right? At least I think most of us do. Call center agents who value their jobs as well as the organization they work for, and who take pride in delivering top notch customer service, consistently strive for this achievement. These are your “top performing” call center agents who cannot wait to see how their customers scored their performance via the External Quality Monitoring program. Survey scores and comments are the customer currency that motivates them. In a perfect call center world, your seats would be filled with this type of All-star. Unfortunately, that is rarely the case, but All-stars can increase the performance of the rest of the team.
Customer Relationship Metrics produces an annual Elite Customer Experience Awards program consisting of seven award categories. One category is the All-star recognition, specifically “Agent of the Year.” The recipient is selected from the tens of thousands of agents, across many industries, in our client base. How does the annual award recipient deliver Elite customer experiences call-after-call, day-after-day, throughout the year?
This All-Star has valuable coaching tips for her call center teammates that can be used by any agent. She identifies four main strategies used in every call:
1. Ask more questions than deliver statements. The Agent of the Year consciously flips the standard statement-based responses into question-based responses to increase the caller’s perception of control over decisions.
Customer: How can I make a payment on my bill?
Agent: You can make your payment by check or we can automatically deduct it from your bank account online.
Customer: How can I make a payment on my bill?
All-star Agent: Would you prefer to pay your bill by check or by setting up an automatic deduction from your bank account online?
2. Stop and listen. No one likes to be interrupted. Customers may interrupt the agent and when they do always remember that they have the right of way. The second a customer begins to speak – stop and listen. This will go a long way when it comes to agent satisfaction scoring.
3. Use a little finesse. Every call center agent has to deliver bad news to customers. Using authentic empathy when explaining to a customer that they cannot be helped and WHY they cannot be helped will leave the customer with the understanding that all was done and the relationship is valued.
4. Build rapport. People like to be treated like people not just as an account number or as an anonymous creature on the other end of the line. It helps to connect with customers on a personal level, to even share a laugh. Even a small, genuine personal connection underscores the value of the relationship for the customer.
It is not uncommon for All-star agents to have difficulty expressing how they do it. Analytics of call monitoring, operational metrics and customer feedback linking the call and the agent provide a list of your All-stars. Listen to their calls and talk with them about their service philosophy to find their list of techniques to help the entire team.
First-contact resolution has been a hot call center metric for years now. There are white papers and articles galore on the topic, and entire conferences and online forums dedicated to it. Most telling is that numerous managers have gotten “FCR Forever” and/or “One and Done” tattooed on their necks.
The majority of conversations about FCR center around two things: 1) the huge potential impact of FCR (on operational costs, customer satisfaction and agent satisfaction/retention); and 2) how the hell to measure this mega-metric accurately (no simple task, as you’ll see in my upcoming ebook, Full Contact).
What often gets lost amidst the FCR hype and the confusion surrounding its proper measurement is something even more critical: What processes, practices and tools a contact center can put in place to help improve FCR. Customers don’t care if you know how to measure FCR, they simply want you to achieve it. Following is a list of tactics to help you do just that:
Excellent agent training and tools. If your agents lack skills, knowledge and/or immediate access to key information on calls, your FCR rate is going to be lower than the average winter temperature in Greenland or the average morale level in a billing contact center. Top centers provide comprehensive new-hire training to rookies and frequent ongoing training to veteran agents, forever keeping staff abreast of new products/services, information and approaches to help them provide the most efficient and effective service. In addition, these centers equip agents with user-friendly, fast and frequently updated desktop tools and knowledge bases that enable staff to find crucial customer data and product/service info in a flash, thus reducing the number of times customers must be placed on hold, transferred, called back, or physically restrained.
World-class workforce management processes. Even the best-trained and equipped agents on the planet will die without oxygen, thus it’s critical to schedule enough staff to enable each agent to take at least two breaths between calls. Agents can’t resolve calls if they are having a stroke, or if the customer – who has been caged in the queue for 15 minutes – is screaming at them for taking so long to answer the phone. Thus, accurate forecasting and sound scheduling based on those forecasts is critical, as is mastering skills-based routing so that callers get sent to the right agent with the skill-set to handle the customer’s specific issue, and not to Bob – the quiet guy in the corner cubicle who makes paperclip sculptures of his mother.
No conflicting performance objectives. Many contact centers tell agents to focus on FCR, but then pressure them to achieve strict productivity objectives that interfere with agents’ ability to truly focus on the customer. Conflicting performance objectives are the number-one cause of agent-on-manager violence in America. Making FCR a KPI in your center but then punishing agents for not handling a certain number of calls per hour/shift or for going a little over the desired AHT average will not only hinder your center’s chances of achieving FCR success and customer satisfaction, it may result in you being killed or worse by furious frontline staff.
Incentives around FCR goal achievement. It’s always a wise practice to align agent incentives with the contact center’s and the enterprise’s performance goals. And since FCR success should be a top priority for nearly all customer care organizations, nearly all customer care organizations should reward and recognize agents when they consistently meet or exceed individual, team and center-wide FCR goals. Top contact centers do more than just order pizzas or pat staff on the back to celebrate current and propagate future FCR success; rather, agents in these centers receive cash prizes, meaningful gifts/gift certificates, as well as public recognition at interdepartmental meetings and via internal newsletters/the corporate intranet. In addition to incentivizing and rewarding agents for FCR success, some centers de-incentivize and punish agents for FCR failure. This typically includes taking cash and gifts away from agents, publicly humiliating them at meetings and via newsletters/the intranet, and forcing them to spend an hour alone in a room with somebody from IT.
Agents empowered to improve FCR-related processes. Your agents know customers and customer care better than anyone, assuming your center’s hiring and training programs don’t blow. Smart contact center managers actively solicit suggestions and insight from agents regarding how they may be able to enhance FCR performance. Given the opportunity, agents will tell you what tools, training and workflows are lacking, and what processes and metrics are interfering with their ability to effectively resolve customer issues. They will also tell you what color they would like the contact center to be painted and why they need a new headset that doesn’t shock their ears, so be sure to cut them off before they stray too far from the topic of FCR.
I’d love to hear some of your ideas on FCR improvement, and/or about any tattoos you have gotten to show your dedication to this key metric.
Read more: http://www.greglevin.com/index.html
Greg can also be reached via twitter @greg_levin
Whenever I have an opportunity to visit a business partner’s call center, I take a few minutes to conduct a rather un-scientific test, call it morbid curiosity. As I pass by cubicles and am introduced to call center staff, I always ask how agent performance is assessed. To me, the variety of responses I hear speaks volumes and perhaps helps explain the responses I get from call center agents when I pose the exact same question. Typical responses are a shrug of the shoulders, a shaking of the head and a quick glance to co-workers for reinforcement. They don’t know, feel they cannot explain the complexities or simply don’t remember.
Call center agents are expected to know and retain more and more information in today’s complex business environments. Unfortunately, our short-term and working memory capacities have not increased to accommodate this environment. Agents are also expected to generate customer perceptions that the service was excellent while managing the call to the operational metrics. Talk about feeling committed…to an asylum!
If you want your agents to feel less like they NEED to be committed and more like they are committed to the customer experience, keep in mind and act in accordance with the very basics of their job expectations. And be clear about those expectations. One business partner I visited earlier in 2010 had their KPIs updated in real-time on dozens of flat screens in the call center. Another business partner created banners as a colorful reminder of the quarter’s call center initiatives. Great ideas, because the agents understood why the KPIs were important, what impact they individually had on them, and how they benefited from performing to them effectively. Don’t assume this is the case without testing your assumption.
A balanced scorecard can serve as a visual cue for agent success. Well-designed balanced scorecards are typically made up of four parts:
1. Metrics by which performance will be assessed,
2. Performance objectives for each metric,
3. Weighting applied to each metric (an indication of relative importance), and
4. An individual agent’s performance on each metric.
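As a rough illustration of how the four parts fit together, the sketch below combines metrics, objectives, weights, and one agent’s results into a single composite score. The metric names, goals, weights, and results are assumptions for illustration, not a real client’s scorecard.

```python
# Balanced scorecard sketch: each metric carries an objective, a weight,
# and a direction; an agent's composite score is weighted goal attainment.
scorecard = {
    # metric: (performance objective, weight, higher_is_better)
    "customer_satisfaction": (8.5, 0.40, True),    # 1-9 survey scale
    "first_contact_resolution": (0.80, 0.30, True),
    "quality_monitoring": (90.0, 0.20, True),      # internal QA score
    "average_handle_time": (360.0, 0.10, False),   # seconds; lower is better
}

agent_results = {
    "customer_satisfaction": 8.8,
    "first_contact_resolution": 0.76,
    "quality_monitoring": 92.0,
    "average_handle_time": 410.0,
}

def composite_score(results, card):
    """Weighted sum of goal attainment, capped at 100% per metric."""
    total = 0.0
    for metric, (goal, weight, higher_is_better) in card.items():
        actual = results[metric]
        attainment = actual / goal if higher_is_better else goal / actual
        total += weight * min(attainment, 1.0)  # no extra credit past goal
    return round(100 * total, 1)

print(composite_score(agent_results, scorecard))  # 97.3 with this data
```

The cap on attainment is a design choice worth debating: without it, an agent could hide a weak customer-experience score behind an over-achieved efficiency metric.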
Selecting performance metrics
Traditionally, call centers have managed performance based on the goal of operational efficiency. We see this drive for efficiency continue today through the management of call center agents to metrics like average handle time, number of calls handled, after call work, etc. While these are very important metrics, the fallacy is in managing a call center to only these internally-focused metrics. Customers do not care how much time an agent has to spend filling out paperwork or electronic forms after the call ends. Rather, they care that an agent is available within a reasonable amount of time when they call. Customers care even less about how long they need to spend on the phone with a (single) call center agent, as long as the problem has been resolved with that call. A 30-minute call might end with a delighted customer, a frazzled call center manager and a very confused agent. Are you seeing the disposition toward schizophrenia now?
The key to success is in selecting a variety of metrics that speak to the customer experience and balancing them with the business need for efficiency (we will speak more about this balance when we talk about scorecard weighting). Best-in-class business partners also incorporate other data sources into their agent scorecards such as internal quality monitoring data, chat, text, SMS and email data, etc.
Setting performance objectives
We’ve all been in situations where a goal was picked out of thin air by a well-intentioned executive and then carved into stone for us to follow. Absent that scenario, the best goals are set based on actual historical performance. Important elements to consider in setting performance goals are:
- Mean or Median? (measures of central tendency) – In order to set a goal for future performance, we must first have an understanding of how we’ve performed in the past. Measures of central tendency indicate the point on a performance continuum where the members of a group or dataset tend to gather. While the mean (often referred to as the average) is more widely reported in call centers, it is most useful in groups whose performance is relatively normal (normal from a statistical standpoint, that is). A normal distribution is one in which a majority of group member performance is centered around the middle of the performance continuum and the distribution of performance is perfectly symmetrical to the right and left – in short, a bell curve. Unfortunately, this type of distribution is not typical of call center performance. As such, the median (the point at which half of the group’s members fall above and below) may be a better way to determine how the call center “typically” performs on any given metric.
- Time frame of historical data – Having decided whether the mean or median will be the most appropriate statistic for determining a baseline of past performance, we must now define a time frame to represent history. At a bare minimum, Customer Relationship Metrics recommends that at least three months of data be used to minimize the impact of anomalies in performance and non-normative events impacting performance. Ideally, a larger time frame would be used which encompasses all stages of a company’s business cycle or seasons (1 year). The danger in using more than a single year of historical data to establish a performance baseline is the possibility of negating or underplaying recent performance gains – essentially making the performance goal too easy to achieve.
- Predicting the future – Once a historical baseline of performance has been established, the same data set can then be used to make predictions about future performance (statistical modeling). Performance objectives can then be based around those predictions. Some business partners have also found some success in applying a 5% to 7% “lift” to historical performance and using that lift as the performance goal for the following year.
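The baseline and lift ideas above can be sketched together in a few lines. The handle times and the 6% lift are made-up values for illustration; the point is how one extreme day distorts the mean while the median stays near “typical” performance.

```python
from statistics import mean, median

# three months of daily average handle times (seconds) -- made-up data
handle_times = [240, 250, 255, 260, 265, 270, 280, 290, 300, 1800]

print(mean(handle_times))    # 421.0 -- dragged upward by one extreme day
print(median(handle_times))  # 267.5 -- closer to "typical" performance

baseline = median(handle_times)   # the more robust baseline here
lift = 0.06                       # within the 5%-7% range discussed above

# handle time is a lower-is-better metric, so the goal shrinks
goal = baseline * (1 - lift)
print(goal)
```

Setting the goal off the mean instead would have demanded a 40%+ improvement from agents who were, on a typical day, already there.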
Weighting the metrics
The weighting applied to each metric on a scorecard indicates its relative importance to the call center and to the larger organization. Before arbitrarily applying weighting or points to each metric, think about the organizational goals that have been set for the fiscal year and the ways in which the call center contributes to these goals. Doing so will help you make the first critical decision – whether to focus on the customer’s experience or on organizational costs. Weighting within each category of metrics (operational vs. customer experience, etc.) can then be determined based on the degree of impact each metric has on the category outcome (ex: issue resolution has a higher impact on customer experience than courtesy, so issue resolution should have a higher point or weighting allocation associated with it).
Individual agent performance
If one of your goals in implementing a balanced agent scorecard is to keep agents informed about their performance and incite healthy competition, ensuring that your agents have ready access to accurate scorecards will be a key determinant in the success of the initiative.
During one of my recent visits to a business partner, I took my usual walk through the call center and was quite pleased to see the number of agents who were logged in to Customer Relationship Metrics’ MPM real-time agent scorecards. MPM (Metrics Performance Manager) is a reporting tool that Customer Relationship Metrics uses as part of our applied business intelligence services to gather data from disparate sources. We’ve found that one output of this reporting tool that can be very motivating to agents is the scorecard. In this call center, agents were actively managing their own performance and receiving immediate feedback from the system about the changes they were making to their interactions with customers. Feedback from the ACD about their efficiency, feedback from customer satisfaction surveys, and feedback from the Quality Assurance team are all in a single location, updated in real-time. Imagine the burden you would remove from your supervisors if your agents were that tuned-in to their own performance!
A few weeks ago, I was reading an interesting article about schizophrenia. It talked about the statistics, symptoms and treatment for this terrible disease. At first I was alarmed by the recent research numbers, an estimated 3.2 million Americans suffer from this mental illness. Wow. As I read on, I learned that four types of “delusions” exist in schizophrenics, and from that list of four, “Delusions of Control” is one that really struck a chord with me. Naturally, I started to draw some parallels between this particular symptom and people I know, myself and those in my line of work. I do believe it’s fair to say that based on the delusion of control alone, we all have a touch of schizophrenia from time to time. Perceived control is a way of life in the call center.
When reflecting on life inside a call center, it’s easy to believe that we are patients who are often not medicated to control our delusions. The call center as an asylum may not be a stretch! Not only is it insanely intense, it is also a place of constant contradiction. We often expect our employees and our call center agents to adhere to a specific model intended to produce a controlled response (a great service experience). In the same breath, we also expect that model to produce the opposite results (do it fast, right and cheap). Isn’t this setting your team up to feel schizophrenic? We allow agents to believe they are in control, but in reality, they are not.
I was reminded of this parallel when speaking with one of our partners last week. This particular client had three service centers using the “Pay for Performance” model with their agents. As he elaborated on the damage this was causing, I began to recall the correlation between my recent revelation on call center schizophrenia and the “Pay for Performance” model (particularly in service-oriented call centers). In this model, agents are paid based on metrics such as the number of calls handled and the number of minutes spent on those calls. This is the expectation set forth. At the end of the month, organizations are left scratching their heads as to why customer satisfaction scores are so low. Well, the innate service component is being squished out of the agent as they hurry on to the next caller. Yet we expect an outstanding customer service experience to come from our service-oriented call center, right? Insanity in its true form, and we’ve all had this conversation with ourselves and everyone on the management team.
This will be the first in a two-part series focusing on designing the perfect, or as-perfect-as-you-can-get, model for service call centers. Part One will discuss the “Pay for Performance” model, how it has been incorporated in service call centers and how it is affecting your agents and your customer service scores. Part Two will discuss how to build effective balanced scorecards and, in turn, a more appropriate model for your service call centers. We need to control the insanity!
What is “Pay for Performance?”
“Pay for performance,” also known as incentive pay, rewards workers based on the outcomes they achieve, as opposed to the traditional model of paying for time worked. These models have been wildly popular in outbound telemarketing for many years, advancing the earning potential of skilled salespeople while “weeding out” those who in a conventional pay model would largely rely on their base salary to pay their bills. More sophisticated (sales) incentive pay models financially penalize agents for “buyer’s remorse,” encouraging quality sales acquisition methods.
Sales vs. Service
While time- and outcome-pressured compensation models may work in a sales environment, they represent the antithesis of what is needed in the service world. Conventional wisdom states that in a sales environment, there is only one outcome that matters — sales that “stick.” Certainly there are complexities in how a call center agent reaches a “Yes,” but that does not negate the fact that there is only one outcome guiding the call flow. A customer service call center is far more complex. Customer service call center agents are tasked with resolving calls in a manner that pleases customers and builds brand loyalty, while remaining sensitive to everyone’s time – the customer on the phone and the one waiting to be helped. That is quite a tall order, especially when a case can be made that the Sales team is often responsible for the call to the Service team. At the end of the day, incenting agents based on a single outcome may expose your organization to a very high level of business risk.
“Why are my customer service scores so low?”
In a service environment, the correlation between time spent and outcome is much more fluid. Let’s examine some of the unfortunate outcomes of ill-conceived pay-for-performance models in customer service centers:
As an assignment, add your metrics to this list and evaluate them against the delusion of control construct.
“If not ‘Pay for Performance,’ then what should we use?”
In a service center, a balanced agent scorecard is a far more effective way to pay and incent agents. Balanced scorecards force agents and their managers to focus their attention on more than a single Key Performance Indicator (KPI). Some of CRM’s existing customers have access to an important tool which assists them in determining the relative importance of agent skills, from the perspective of the customer – predictive (regression) modeling. In the figure below, the beta levels on the left indicate the level of impact each agent skill had on the customer’s overall perception of the agent’s performance. The right side of the figure indicates the current performance level of that skill (on a 1 to 9 scale).
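To illustrate how this kind of predictive (regression) modeling works in principle, here is a minimal sketch using ordinary least squares. The skill names and survey responses below are entirely hypothetical and simulated, not CRM’s actual model or data; the point is only to show how fitted beta coefficients can rank agent skills by their impact on the customer’s overall perception.

```python
import numpy as np

# Hypothetical survey data: each row is one customer survey.
# Columns are 1-9 ratings of individual agent skills; y is the
# customer's overall rating of the agent's performance.
rng = np.random.default_rng(0)
n = 500
skills = ["Courtesy", "Knowledge", "Understood reason", "Resolution"]
X = rng.integers(1, 10, size=(n, len(skills))).astype(float)
# Simulate an overall score driven mostly by knowledge and resolution.
y = (0.1 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * X[:, 2] + 0.3 * X[:, 3]
     + rng.normal(0, 0.5, n))

# Ordinary least squares: the fitted coefficients (betas) estimate
# how much each skill drives the overall perception.
A = np.column_stack([np.ones(n), X])  # add an intercept column
betas, *_ = np.linalg.lstsq(A, y, rcond=None)

# Rank skills by impact, highest beta first -- this ordering is what
# would feed scorecard weights and training priorities.
for name, b in sorted(zip(skills, betas[1:]), key=lambda t: -t[1]):
    print(f"{name:20s} beta = {b:+.2f}")
```

In practice the betas would come from real matched survey data rather than a simulation, but the output has the same shape as the figure described above: a ranked list of skills by customer impact.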
One business partner uses this regression output to not only set priorities within their agent scorecard, but to also set priorities for ongoing / developmental training for the upcoming quarter. The figure below indicates the degree of improvement in customer satisfaction this business partner has been able to achieve by linking customer, agent and training priorities.
Companies using the “Pay for Performance” model in their service call centers will remain at war with themselves. If you pay your agents based on the number of calls they take, you will get a high number of repeat callers and lower first call resolution (FCR) rates, because customers are being rushed off the call. If you are currently using the “Pay for Performance” model in your service center, have you experienced similar results?
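The repeat-caller symptom described above is measurable. As a rough sketch (the field names and the seven-day window are assumptions for illustration, not a standard the source prescribes), a repeat call from the same customer within a short window of a prior call is a common proxy for an FCR failure:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical call log: (customer_id, call_start). A repeat call is
# any call from the same customer within 7 days of a previous call --
# a common proxy for a first-contact-resolution (FCR) failure.
calls = [
    ("C1", datetime(2024, 1, 2, 9, 0)),
    ("C1", datetime(2024, 1, 4, 14, 0)),   # repeat: 2 days later
    ("C2", datetime(2024, 1, 3, 10, 0)),
    ("C3", datetime(2024, 1, 5, 11, 0)),
    ("C3", datetime(2024, 1, 20, 11, 0)),  # not a repeat: 15 days later
]

window = timedelta(days=7)
by_customer = defaultdict(list)
for cust, ts in sorted(calls, key=lambda c: c[1]):
    by_customer[cust].append(ts)

# Count consecutive calls from the same customer inside the window.
repeats = sum(
    1
    for times in by_customer.values()
    for prev, cur in zip(times, times[1:])
    if cur - prev <= window
)
fcr_rate = 1 - repeats / len(calls)
print(f"repeat calls: {repeats}, FCR proxy: {fcr_rate:.0%}")
```

Tracking this proxy alongside calls-per-hour makes the tension visible: if call counts rise while the FCR proxy falls, the incentive model is working against the service mission.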
Now that we have identified a more suitable, more effective model to adopt in your service call centers, it’s time to discuss “how.” Part Two will discuss just how you can build effective balanced scorecards to incentivize your agents.
Work-at-Home Agents Damage Net Promoter and Customer Satisfaction. Is this Preventable? A Call Center Case Study
The remote agent model is compelling for many reasons – from eliminating the cost of physical work space, to decreased employee attrition, to the higher caliber of employee that can be hired once geographical limitations disappear. At the recent Frost & Sullivan Customer Contact 2010 Event, Michael DeSalles, Strategic Analyst of Contact Centers, stated, “It is estimated that the work at home agent model is growing by 40% annually.” Attrition among work-at-home agents is only 10%, compared to attrition rates of nearly 50% in typical call centers. The ability to fill agent positions with individuals who have college degrees (80% of work-at-home agents do) and management experience, while reducing overhead, is an intriguing proposition to many organizations.
We recently covered this case study during our quarterly Customer Insight to Actions (CIA) user group meeting. As an applied Business Intelligence services firm we deliver actionable insights and best practices from data that is mined from voice of the customer, voice of the employee, and call center performance metrics. This case study was shared anonymously by one of our business partners after they deployed a call center remote agent model.
In the graph below you can see that, from the beginning, the work-at-home call center agents performed worse when customers rated their likelihood to recommend the company after their service experience (Net Promoter). As a result of this business intelligence, the call center began correcting its work-at-home agent model soon after deployment, but, as you can see, recovery has been slow.
Here you get a similar perspective by looking at customer satisfaction with the call center agent.
So why did Net Promoter scores and customer satisfaction drop when one call center sent its top-performing agents home to work? Despite significant planning to address the technological challenges of remote workers, this call center quickly discovered the downside of its work-at-home call center agent model. Previously top-performing agents struggled to maintain even average-level performance.
In re-engineering its work-at-home agent model, the call center weighed many factors it had not previously considered.
A few items you may consider are:
1. Have we selected the right employees? The ability to perform well in your call center may not guarantee that a high performance level will be maintained once the agent goes home to work. When selecting remote agents, certainly job competence has to be a key factor, but is it the only factor to consider? Do the agents that are effective in working from remote locations share similar characteristics? Additionally:
- Are they the self-motivated ones that strive to out-perform their peers and their own historical performance because of the satisfaction it brings them, not the praise they may receive from others?
- Are these the “low maintenance” call center agents? Do the supervisors give them little supervision or direction to complete their job responsibilities? Will this still be true when they work at home?
- Do these call center agents typically learn new systems, platforms or programs more quickly than others? Do they have a natural interest in technology and can therefore help (not impede) remote trouble-shooting?
2. Do you need a different set of expectations? Is one of the key factors to a successful remote agent model consistency of customer experience? A customer should not be able to tell whether an agent is working from an office with dozens of other call center agents around him / her or in a home office. Of course, no dogs, screaming children, deliveries, plumbers or televisions in the background. Do the call center agents realize that the ability to work from home is a privilege – a privilege that is contingent on them maintaining a high level of performance?
3. Do you need a different plan for coaching & training? Is simply requiring that remote agents dial into regular office training sessions sufficient? Much like face-to-face training, does remote training need to be designed to teach individuals in all of the different ways in which they learn – by sound, by visual instruction and by tactile experience (doing while training)?
4. Do you need mandatory in-office events to keep agents engaged and entrenched in company culture? This consideration was the most controversial among the participants of the CIA Meeting, as it requires companies to limit their recruitment to within 40–50 miles of the company’s location. Proponents of this approach cited the ability to have remote agents work from an office location in the event of an internet, phone or electricity outage, as well as the opportunity to maintain engagement through regular face-to-face contact. Do you need this?
The goal of our user group meeting case study discussion was not to construct the perfect call center at-home-agent model. Instead, it was to provoke thought. Because one thing is certain: you must think in a totally different manner than you are used to when designing an at-home agent model. If you don’t, the risk of failure is high and the road to recovery is long.
Michael DeSalles, Strategic Analyst, Frost & Sullivan – April 19, 2010
“We got calls in Queue!” How Call Center Agents “should” respond to longer wait times…a Case Study in Call Center Analytics.
At CRM we use numerous proprietary and universal analytics, methods and tools. To understand the key drivers of desired outcomes we often use regression analysis. Regression analysis helps to not only capture Business Intelligence, but it can easily translate into both strategic and tactical plans for refining your relationship with your customers and your organization. Properly identifying and aligning your customers’ priorities to your organizational activities (customer-centricity), has been a focal point in our recent Customer Insights to Action – User Group discussion topics, quarterly reviews and monthly EQM meetings. It is also a major focus for the competitive marketplace as a whole.
We support and teach call centers how to use the Business Intelligence created from our analysis to identify agent coaching, monitoring and training priorities. Skills and behaviors that rank high among customers are given the greatest attention, while call center agent performance within the lower customer priorities is maintained.
Extenuating circumstances – such as abnormally high wait times, or the launch of a new knowledgebase or CTI system – call for more frequent use of this key tool, because you need to adjust to customer needs. The skills that your call center agents draw upon during the normal course of their day are quite different from those required to calm a frenetic or incensed customer who has waited longer than they would like to reach an agent. Regression analysis can identify the skills and behaviors that can help your call center agents successfully navigate this thorny situation. This type of activity is what separates “the best” from the rest.
The regression analysis below is based on a three-month time frame in which the business partner’s call center operated within the set Average Speed of Answer (ASA) goal. This regression analysis indicates that the behaviors or skills most valued by customers (in declining order of importance) are gaining the customer’s confidence in the information presented and quickly understanding the reason for the call.
The following regression analysis was generated for the same call center during a two-month period when ASA exceeded goal by approximately 30%. Note the dramatic shift in customer priorities. When wait time exceeded customer tolerance, the agent’s ability to quickly understand the reason for the call (and communicate this understanding) became, by far, the largest driver of the customer’s satisfaction with the call center agent. The importance of all other agent skills dropped off, as the one skill that pointed to the efficiency of the remainder of the call emerged as key.
So at what point on the wait-time-continuum do call center agents need to shift their approach to customers? The answer will be different for every industry and business partner, but the good news is that you already have all the data you need to answer this question! By “marrying” customer survey records to the wait time they experienced, you can determine the point in (wait) time when overall customer experience and brand loyalty begin to significantly degrade.
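The “marrying” step above can be sketched in a few lines. This is a simplified illustration on simulated data – the 60-second bands, the half-point degradation threshold, and the shape of the satisfaction curve are all assumptions for the example, not findings from the case study. The real analysis would join actual survey records to the actual wait times those customers experienced:

```python
import numpy as np

# Hypothetical joined records: each survey score matched to the wait
# time (in seconds) that customer experienced before reaching an agent.
rng = np.random.default_rng(1)
wait = rng.uniform(0, 600, 2000)  # waits of 0-10 minutes
# Simulate 1-10 satisfaction that degrades sharply past ~180 seconds.
sat = np.clip(
    9 - 0.015 * np.maximum(wait - 180, 0) + rng.normal(0, 0.8, 2000),
    1, 10,
)

# Bucket waits into 60-second bands; mean satisfaction per band.
edges = np.arange(0, 660, 60)
bands = np.digitize(wait, edges)
means = [sat[bands == b].mean() for b in range(1, len(edges))]

# The "tipping point" is the first band where mean satisfaction falls
# more than half a point below the shortest-wait baseline.
baseline = means[0]
tipping = next(
    (edges[i] for i, m in enumerate(means) if baseline - m > 0.5), None
)
print(f"satisfaction begins to degrade past ~{tipping} seconds of wait")
```

Once the tipping point is known, it can drive both the ASA goal itself and the trigger for coaching agents to shift their approach when the queue backs up past it.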
What is your customer-centric tipping point for Average Speed of Answer (ASA)? Or do you use a call center benchmarking status quo? When do your agents know to do something different? Are you coaching and training them to be more customer-centric (and successful) with Business Intelligence such as this? One thing is certain…your customer expects it.
The old saying is true, “Good news travels fast, bad news travels faster.” When it comes to your customers, tales of a bad experience spread like wildfire. You better believe that an unhappy customer is telling everyone they know about their ordeal. Family, friends, colleagues at work, strangers in line at the grocery store, anyone who will listen will hear their tale of the terrible customer service experience they had with your organization.
To add more fuel to the fire, within minutes the incident is posted on their Facebook, LinkedIn and Twitter for all the world to see. Who knows? Maybe they become one of the millions who create groups to spread the evil word about your company. Essentially, this customer has turned into a terrorist to your company and to your brand, destroying all that you have worked hard to build. Sound a bit dramatic? It’s not. We’ve seen this time and time again.
Let’s look at new customers first. Your very first interaction with a new customer is a vital point in a budding relationship. A great first impression helps build a halo effect, where the positive experience with the contact center “bleeds over” to create a positive impression of the entire organization. The same can be said of existing customers. Existing customers stay loyal because of the nurtured, good customer experiences they’ve had thus far, each and every time they’ve engaged with your contact center. In either case, one bad experience with any of your agents could sour the relationship, and its potential, forever.
So, what’s a major factor in long-term customer retention? Consistency. Consistency is key in creating advocacy. How can you ensure you are being consistent with your service? Here are some steps that will increase your customer’s satisfaction, advocacy, and ultimately, your company’s bottom-line:
Step 1: Train consistently. New-hire training is your company’s first opportunity to communicate to your new employees the company’s vision for its future and how their job fits within this vision. Explaining to new employees how their performance will be assessed is a key first step in aligning company vision to individual priorities. Educating each new employee in the same manner on company policies, products, services, and how to use the resources available to them to reach and exceed expectations will ultimately determine whether customers experience consistency across their interactions with you.
Step 2: Set consistent expectations for performance. The contact center is among the most measured segments of any organization. We collect dozens, if not hundreds, of metrics about every contact center agent: measures of efficiency, compliance, quality, customer experience, etc. With all of those metrics, it is all too easy to send contact center agents a mixed message about what is important and what is a top priority. If your company’s aim is to satisfy and delight your customers, the performance expectations you communicate to your contact center agents must perfectly align with this focus.
In all stages of the customer lifecycle, new and budding or existing and nurtured, you want to ensure you are delighting your customers each time they contact your organization. In today’s social and economic environment, it’s vital to grow tulips, not terrorists. Consider consistency, your magic beans.