
Trash the Canned Emails in Your Call Center

Nearly a year ago, I wrote a blog post entitled “Self-serve: Cheap can be very expensive” about the high customer experience cost of the self-serve model. Imagine my delight to see a recently published study conducted by TSIA and Coveo supporting Customer Relationship Metrics’ conclusion. Among the study’s findings: while voice and face-to-face contact are the most expensive ways to support customers, they also result in the greatest customer satisfaction.

I realize this study is not going to make anyone shut down their email, web chat and self-serve programs. Instead, this three-part blog series is designed to help you make these types of interactions better for your customers and give you greater insight into the customer experience results for the various channels handled in your call center.


Do you need Steve Jobs to do your Call Center Analytics?

The success of any Business Intelligence project is contingent upon people, not technology. Analysts and end users must work in concert to ask a concise question, identify the data available to answer that question, and validate the interpretation of analytic outputs in the context of the business environment. From there, the subject-matter experts (statisticians, data analysts, data miners, etc.) must be allowed the freedom to draw upon their breadth of knowledge and experience to select the best methodology for the job.

I cannot tell you how many times a business unit manager has come up to me and, with all the confidence of a just-learned-to-stand toddler, declared, “I need a model!” “Really?” I respond. “What type of model? Logistic? Linear? What kind of data do you have for me to work with?” And a plethora of other rather technical questions. My point is that predictive models have been used quite successfully in marketing for many years. In a business environment where “half of the organizations surveyed do not take advantage of analytics to help them target, service, or interact with customers,” according to Accenture’s Customer Analytics survey, predictive models have gained esteem and notoriety akin to Steve Jobs’.


Your contact center agents have an expiration date

Did you ever think your agents have an expiration date? Many of us in call centers chase the holy grail of higher agent tenure, assuming that agents will use the additional knowledge and experience attained through tenure to better serve customers. The unfortunate reality, according to customers, is that more tenured agents don’t deliver a better customer experience; they deliver a worse one, despite being armed with all of the knowledge and skills that “rookies” are thought to be acquiring. And that customer experience continues to diminish the longer your agents languish in your call center.

During our recent Customer Insights to Action meeting (a quarterly meeting open to all of our clients), Customer Relationship Metrics updated a case study on this same subject. In the original analysis of the customer experience, we found that agent performance peaked in month 11. At the time, we hypothesized that the peak of this performance bell curve would vary based on industry, management style, new-hire training, company culture and a number of other variables. What we found just recently is that customers rate peak service performance when agents’ tenure is between 9 and 11 months.

Many agents have an expiration date that’s shared

While this finding is clearly interesting, it’s really bad news for the many organizations whose tenured agents far exceed the peak of the performance curve (including company 1, with an average agent tenure of 3.56 years). Every day, week and month that agents remain employed past their peak, they’re putting your most important relationships in jeopardy!

Agents have an expiration date? How’s that good news?

So, agents have an expiration date. The good news is that there are many ways to extend your agents’ value past that date. All of them require that you stop focusing on the arbitrary metric of tenure and shift your attention to building agent engagement. Engaged agents not only remain in your employ longer, but they also contribute positively to the customer experience because they are emotionally invested in your organization and the relationships that help it flourish. Below are a few suggestions to extend your agents’ expiration dates:

  • Promote agents reaching the end of their peak performance curve into escalation agents – Every call center receives those calls from unusually irate customers who refuse to talk to a “regular” agent and demand the attention of a higher-up. Some agents stagnate because they’re bored. Assign them to handle these escalated calls and watch them step up to the challenge!
  • Have agents reaching the end of their peak performance curve respond to service alerts – Every survey project CRM launches includes (customized) triggers that immediately alert the call center management staff to an at-risk customer experience. Make your more tenured agents responsible for researching these case histories and following up with these customers. You’ll be amazed at how quickly they integrate this new information into their call handling and how vocal they become in communicating what they’ve learned to their peers on the call center floor.
  • Be supportive of the fact that for some agents, the call center is only a launching pad – While this may be frustrating to many of us who invest a fair amount of time training and educating agents, the fact is that not everyone wants to remain in the call center for eternity. Support your agents in growing towards their career vision and you’ll earn their loyalty and best effort while they’re with you.
  • Show agents how they contribute to the larger company picture – Recently I had the opportunity to visit one of our business partners and present how their efforts to deliver the very best customer experience positively impacted the entire organization. It had taken quite a number of years for the organization’s upper management to be open to this type of claim. Immediately after that meeting ended, the call center management team held a luncheon with all of the call center agents and presented our findings. Understanding how their daily efforts contribute to the organization’s bottom line, and how their hard work changed the way the call center was viewed by the organization at large, was exactly the boost some agents needed to keep improving. And much love could be felt for their fearless call center manager, who fought for their efforts to be recognized.
  • Create a mechanism for upward flow of feedback – Create a mechanism by which employees can openly (and without fear of repercussions) provide feedback to management and about management. At a prior employer, our culture was so open that I regularly had agents dropping by my office to provide feedback about my management staff … and me. If your company culture doesn’t make this type of open interaction likely, consider holding quarterly feedback sessions in which an independent moderator (perhaps someone from another part of the organization) collects and disseminates agent feedback, or rotate the responsibility for reporting on the outcome of these sessions among the agents.

Who knew contact center agents have an expiration date? This case study represents the capabilities of Customer Relationship Metrics to find things that are otherwise undetectable in contact centers. We immerse ourselves in the world of our clients through a process that includes data collection, observation, collaboration, strategy analysis, and predictive modeling. Our survey and unstructured data analysis programs go beyond collecting data and compiling a report to post. We detect, uncover, recommend, and enable action. Contact us for a complimentary consult so we can determine how you can gain value from the otherwise undetectable.

Are your post-call surveys used for business intelligence or to beat up call center agents?

Are your post-call surveys considered business intelligence? Let’s be honest: are we always ready for that honesty? Or are you asking in such a way that you only get positive comments? Or using the results just to beat up call center agents?

What we’ve found is that most programs are destructive or met with apathy. The objective is to invite the good, the bad and the ugly; this strategy will help you react to and solve issues with your products and services. For instance, a recent case was the discovery of a supply issue (not enough product to go around) that was causing negative online conversation surrounding the product. The call center knew about the problem long before consumers started tweeting about it, but did not have a good process to disseminate the information. Given how quickly people can express themselves online these days, it becomes even more important to have a proactive process in place, not only to deal with issues but to equip your agents with the tools they need to address consumers’ concerns about supply on the phone. Our client certainly does this now. You hear it, and you need to share your customer intelligence.

Customer feedback that makes you go “hmmm?”

During the customer experience measurement process, we asked customers to explain why they scored an agent or the company the way they did. These open-ended comments provide true insight into the problem or pain the customer has experienced. Some customer comments leave us scratching our heads, saying, “Did he just say that?” When we look at the “why” versus the “what” in these comments, sometimes the extraneous influences appear to have no rhyme or reason. It could be any number of things: frustrations with a factor in the customer’s life, health issues, family problems or just plain loneliness. The comments below certainly made us raise an eyebrow or two, a perfect way to start a Monday.


Communicating the Results – Part 3 of a 4 Part Series: Supervisors and Agents

Over the last two weeks, I’ve covered how to communicate your call center’s results to both Executive Management and the Operations Team. Today we turn our focus to the Supervisors and Agents in your call center. Again, it’s important that each group gets the proper information to perform to the best of its ability.

Reports for Supervisors and Agents

Managing a team of contact center agents requires a combination of quantitative and qualitative customer feedback to measure, track, compare and motivate. And the shorter the lag time between a call and the availability of the customer’s feedback, the better!  

The availability of real-time data expedites a supervisor’s ability to identify trends in performance, provide feedback to agents and conduct service recoveries for defective service experiences. 

Customer Comment Report

Customers are in a unique position to motivate agents through their positive comments. The knowledge that a customer was impacted by a service experience to the point of taking the time to make it known is often more effective than any praise given by a peer or supervisor. Conversely, a customer’s comment can also shed light on sub-par scores. It is the complement of this qualitative feedback to the quantitative data that allows for a holistic approach to the customer experience.

Since customers are not able to see and have never met the agents they interact with, they create a mental image of what this person must be like using expectations and prior experience as a guide. Consumers categorize others because it makes their lives simpler and provides a feeling of control. Callers, therefore, will know (or think they know) how to approach a situation in which they are dealing with people they don’t know because they have already categorized it. They begin with a prototype in mind of what the agent should be like and how the interaction should go. When an agent fits the prototype, and even goes beyond the customer’s expectations, then Wow Factor feedback is collected: 

  • “The young man who helped me was courteous and quite knowledgeable of the ways of the company. In fact, if I ever needed anything in the future I would be tempted to call back and repeatedly call back until I received him. I would even like to have him over for dinner. Maybe even some beer and watch some baseball.”
  • “I found him to be intelligent, quick on the uptake, very pleasant, agreeable and had a sense of humor. That’s rare among bankers.”
  • “Sharon was outstanding. She deserves some additional compensation. This is not one of her relatives. Thank you.”
  • “I was very impressed with the service that I got today over the phone. There is no way we will ever leave you unless somebody really, really screws something up bad.”

The feedback takes a negative direction when the agent does not fit the prototype.

  • “Your customer service needs to do customer service. When they can’t help you or refuse to help you, they follow up with the question: What more can they do to help? Well they haven’t done anything to begin with. A bunch of Cretins.”
  • “This rep treated me like I was stupid. I didn’t appreciate it. You should never treat a customer like they are stupid, even if they are.”
  • “Your reps are the least informed, ill-equipped and most ignorant people I’ve ever run into. This bank is the perfect advertisement for any other bank.”

We have all suspected that satisfaction and/or dissatisfaction in one’s life role may be transferred into other life roles, like an agent taking a bad day out on a customer and vice versa. Frustration or dissatisfaction with a product/service may actually be the result of the consumer feeling frustrated in life roles other than the consumer role. Agents must not only manage the delivery, they must also detect and manage the issue for the caller — all with the company’s best interest at the forefront.

  • “Thank you for making my depressing life a little bit better with the service that you have given me.”
  • “The representative was very courteous and kind. I appreciate her. The only problem I’ve ever had is that our former banker had an affair with my husband. We divorced and now they’re married. So, in that area I’m not satisfied with the services the bank has provided.”

Customers also expect to be treated in a manner consistent with their role as the customer in the interaction. Research shows that consumers evaluate service institutions and personnel positively when the personnel treat them as individuals who have specific needs to be met by the service interaction. If agents do not, customers will let you know.

  • “The representative was very efficient and this is true to your company’s form.  Every time that I have called customer service I have gotten excellent service and today was no different.”
  • “It took four phone calls to get a pink slip. I’ve paid the car off. I deserve the pink slip. The first call I made said I would get it in 10 days; it’s now been 6 weeks.  This phone call said it was mailed yesterday. Somehow I doubt that, but we’ll see.  If I don’t get it, I’ll call you back. I don’t mind. I’m retired. I’ve got nothing to do but call you folks until I get what I want.”
  • “You can return my calls, which you don’t do. I’ve asked to talk to a supervisor a few times. I haven’t gotten a supervisor to give me a call so why should you ask me to waste my time on this survey when you won’t have a supervisor call me. I think that’s pretty rude. You can call me at XXX-XXX-XXXX. I doubt I’ll hear from you but it would be really nice if I did and it would make my day and might change my perception on how I’ve been treated.”

Why be concerned with the research behind customer comments? Well, it’s a component of increasing customer satisfaction and loyalty and creating positive word of mouth. If you can better understand your customers, you can create a better environment for the service interaction. You can also educate your agents and use this information as a training opportunity for them to garner a better understanding of consumer comments. After all, customers do say the darndest things.

Real-Time Performance Dashboards

By viewing the real-time dashboard below, a supervisor could quickly surmise that today’s call resolution and call satisfaction statistics are trending below the month’s average. The supervisor now has a goal for the day, as well as a minute-by-minute indicator of his/her success in impacting these key metrics.
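
As a rough sketch of that at-a-glance comparison (the metric names and figures are hypothetical):

    # Compare today's key metrics against the month-to-date average,
    # as a supervisor would read them off the real-time dashboard.
    # All figures are hypothetical.
    month_avg = {"call resolution": 0.81, "call satisfaction": 0.78}
    today = {"call resolution": 0.74, "call satisfaction": 0.76}

    for metric, avg in month_avg.items():
        delta = today[metric] - avg
        note = "below month average - a goal for today" if delta < 0 else "on track"
        print(f"{metric}: {today[metric]:.0%} ({delta:+.1%} vs. month, {note})")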

Real-Time Alerts

The availability of real-time data also allows supervisors the opportunity to recover customers who have had a dissatisfying service experience. Although the caller may not have been satisfied with the service experience in general, satisfaction with the service recovery experience is significantly related to their intention to repurchase [1].  If there is no process for service recovery, the relationships of 15 percent of your customers are at risk (if not 15 percent, insert the percentage of your callers who would rate the experience as poor). Customers who have had a service failure that was resolved quickly and properly are more loyal to a company than are customers who have never had a service failure — significantly more loyal [2, 3]. The key to success is a quick resolution. How quickly do you initiate a recovery plan after the dissatisfying experience? Is there a service recovery plan in operation?

Many call centers have inadequate processes in place to capture, never mind address, a failure in customer experiences. The process, and its timeliness, leaves too many customer relationships exposed. Service recovery should protect the exposed asset during the call experience (whether that exposure was a direct result of Agent behavior or caused by the organization’s process). Is recovery of the relationship even possible? It is unlikely if you do not know about it, as only about 5 percent to 10 percent of customers choose to complain to the company [4]. More likely, the failure results in negative word of mouth (market damage) and the discontinued use of your products and services. A lost customer is an easy, low-cost-to-acquire new customer for a competitor and is customer value lost to your organization.

Components that facilitate timely notification of dissatisfaction enhance service recovery. With a real-time survey in place, saved customer relationships increase not only customer satisfaction but also customer loyalty. An immediate alert of a failed experience tells an important story. Is there a common issue with a particular agent? Ineffective behavior can be quickly addressed, minimizing the ongoing negative impact for the agent and the organization. Is there a common process issue? Caller dissatisfaction may be rooted in a new policy or procedure. Identify and change the procedure, or identify and provide an effective Agent response to common aspects of customer dissatisfaction. Extrapolate the findings from the service recovery group and leverage this within your organization.

A real-time alert feature delivers significant value by proactively responding to callers who experienced difficulty with an interaction and are leaving the interaction dissatisfied. The EQM program contains a systematic approach to capture the reason for the customer-defined failure (people, process, or technology classifications) to highlight patterns for the organization. Without a framework, proving the effect of a process issue, for example, is more difficult.
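
As a minimal sketch of such a trigger, assuming a simple score threshold and the people/process/technology classification described above (the threshold, field names, and notify routine are all hypothetical stand-ins):

    # Trigger a service-recovery alert when a completed survey scores at or
    # below a failure threshold, tagging the customer-defined reason.
    # The threshold, field names, and notify() stand-in are hypothetical.
    FAILURE_THRESHOLD = 4  # ratings of 4 or lower on the 1-9 scale

    def notify(message: str) -> None:
        # Stand-in for an email, SMS, or supervisor-queue integration.
        print(f"ALERT -> supervisor queue: {message}")

    def on_survey_completed(survey: dict) -> None:
        if survey["overall_satisfaction"] <= FAILURE_THRESHOLD:
            reason = survey.get("failure_classification", "unclassified")
            notify(f"Customer {survey['customer_id']} at risk "
                   f"(score {survey['overall_satisfaction']}, reason: {reason})")

    on_survey_completed({"customer_id": "C-1042",
                         "overall_satisfaction": 3,
                         "failure_classification": "process"})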

We included the dashboard with this post because it communicates to center management the critical elements that must be continually monitored and managed.

To wrap up this series next week, we’ll recap these three groups and introduce some interesting powers of persuasion to help you when communicating the results.


This post is part of the book “Survey Pain Relief.” Why do some survey programs thrive while others die? And how do we improve the chances of success? In “Survey Pain Relief,” renowned research scientists Dr. Jodie Monger and Dr. Debra Perkins tackle these plaguing questions. Inside, the doctors reveal the science and art of customer surveying and explain proven methods for creating successful customer satisfaction research programs.

“Survey Pain Relief” was written to remedy the billions of dollars spent each year on survey programs that can best be described as survey malpractice. These programs are all too often accepted as valid by the unskilled and unknowing. Inside is your chance to gain knowledge and not be a victim of being led by the blind. For more information: http://www.surveypainrelief.com/


References

1. Boshoff (1999). Journal of Service Research, 1(1), 236-249.

2. Blodgett, Wakefield and Barnes (1995). Journal of Services Marketing, 9(4), 31-42.

3. Smith and Bolton (1998). Journal of Service Research, 1(1), 65-81.

4. Tax and Brown (1998). Sloan Management Review, 40(1), 75-88.

Communicating the Results – Part 2 of a 4 Part Series: Operations Team

Last week I talked about how to communicate the results of your External Quality Monitoring (EQM) analytics to Executive Management. In talking about “know your audience,” I was reminded of a trip to Greece. Today, as we turn our focus to the Operations Team, I recall a much more recent story. In fact, this happened two days ago while I was out shopping.

For an upcoming wedding, I was in search of simple black earrings. It was later in the evening when I entered the department store alone (which I like, since there are fewer crowds and I can be in and out) for my quick purchase. Now, typically in your big brand name department stores, there is NO ONE around to help you other than the required minimum Sales Associates on the floor, mainly to man the fort behind the cash register. That night, there was a young lady in the jewelry department who was straightening out the display cases in prep for closing time. She asked if I needed any help. Shocked to hear such a thing from an employee in this particular store, I decided to take her up on the offer. I told her that I was looking for simple black earrings to go with my dress. Well, 30 minutes later, I was back where I started: looking for simple black earrings by myself. The sales associate had shown me everything from silver sparkling, dangling earrings, to red hoops (“that really pop!”), to huge black flower earrings. All of these, I’m sure, would look great on girls in her age group, but I was very specific about what I needed for this occasion. She clearly did not listen and did not know her audience.


Reports for the Operations Team

The packaging of the External Quality Monitoring program is an important marketing tool for the contact center and the operational team responsible for its performance. A critical component of the reports is that the research has been executed correctly and the validity of the results is certain. These data inform operational decisions, populate performance management systems and determine incentive/performance pay. The data are used along with the internal call monitoring data and the operational metrics to provide an accurate assessment of the service function, and to identify directives for each agent and each team.

In addition to providing a snapshot of the service the contact center is providing to customers, operational reports will likely focus on two aspects: location-specific analysis over time and location comparisons. Location-specific data compare the period just past to prior periods, and perhaps even to the same period last year. Such a comparative analysis allows operational management to track changes over time, revealing which interventions yielded the most positive results for a single location.

From a comparison perspective, it is useful for specific locations to be able to rate their performance against their peer locations, as well as the contact center effort as a whole. It can be a source of pride for the successful locations and a spur to more intense efforts for those that lag the whole. Continue to make use of multiple formats for the presentation of results. Remember, some readers are more verbal, visual, or numerical than others.

The highest tier of the operations-level report is a summary of all feedback collected for all of the contact center locations and departments. This view of the data provides a status report of the past period’s performance, which can be compared to prior periods. The table below provides aggregate data regarding customer ratings on each question within the survey. The data in this table present % Delight (in this example, scores of 8 or 9 on the 1-9 response scale used) and the mean score, extrapolated to a 100-point scale.

On this scale, the lowest service rating (a rating of 1) is equivalent to zero points on the 100-point scale, and the highest rating (a rating of 9) is equivalent to 100 points, with the remaining ratings distributed evenly across the scale.
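
As a minimal sketch of that conversion (the sample ratings are hypothetical), each 1-9 rating maps linearly onto the 100-point scale, and % Delight is the share of 8s and 9s:

    # Map 1-9 survey ratings onto the 100-point scale and compute % Delight.
    # The sample ratings are hypothetical.
    def to_100_point(rating: int) -> float:
        # Linear map: 1 -> 0, 9 -> 100.
        return (rating - 1) / 8 * 100

    ratings = [9, 8, 8, 6, 9, 4, 7, 9]  # responses to one survey question

    mean_score = sum(to_100_point(r) for r in ratings) / len(ratings)
    pct_delight = 100 * sum(r >= 8 for r in ratings) / len(ratings)  # 8s and 9s

    print(f"Mean score (100-point scale): {mean_score:.1f}")
    print(f"% Delight: {pct_delight:.1f}%")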

You can classify customer ratings on key questions into loyalty categories to more closely examine the relationship between customer satisfaction and loyalty. Combine key loyalty questions, including customer satisfaction with the company, the representative, and the call itself to create a customer loyalty index (CLI). Customer satisfaction directly relates to long-term customer loyalty that ultimately contributes to shareholder wealth. Trend analysis of CLI is critical to determining if change initiatives are being recognized by customers, reflected in service delivery evaluations, and positively impacting return on investment (ROI).

Four categories are represented in the CLI chart (a classification sketch follows the list):

  • Customer Delight (green in CLI charts). The top two categories on the scale (8 & 9) represent customers that are delighted with your company/service/agents. These customers are key company assets that have been preserved through the service experience. Their high scores provide assurance that they will stay with your company, provided you maintain a consistent level of service.
  • Satisfied Indifference (blue in CLI charts). These customers (categories 5-7) represent the primary focus for the next evaluation period. They are generally satisfied, but cannot be counted in the completely loyal category. If presented with an opportunity, these customers may select a different provider. As such, the goal is to move these customers into the delighted category.
  • Dissatisfied but Recoverable (yellow in CLI charts). Customers in this category (3 & 4) did not have a positive experience and would likely switch, but you may be able to reach out and change their perception by correcting the service experience. As such, timely and informed follow-up is the key to success with this category of customers. These customers should be the secondary focal group.
  • Customer Defection (red in CLI charts). These customers in categories 1 & 2 were very dissatisfied and are most likely to leave your company for an alternative.
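
A minimal sketch of that classification and the resulting category mix, using the thresholds defined above (the sample ratings are hypothetical):

    from collections import Counter

    # Classify 1-9 ratings into the four CLI categories defined above.
    def cli_category(rating: int) -> str:
        if rating >= 8:
            return "Customer Delight"               # 8-9 (green)
        if rating >= 5:
            return "Satisfied Indifference"         # 5-7 (blue)
        if rating >= 3:
            return "Dissatisfied but Recoverable"   # 3-4 (yellow)
        return "Customer Defection"                 # 1-2 (red)

    ratings = [9, 7, 8, 3, 5, 9, 2, 6, 8, 4]  # hypothetical survey responses
    mix = Counter(cli_category(r) for r in ratings)

    for category, count in mix.items():
        print(f"{category}: {count / len(ratings):.0%}")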
The examination of performance means and percentages for key survey questions as described thus far is important, but it contributes only part of the information regarding the callers’ evaluation of the service provided by your centers. The survey data were collected for two purposes:

  1. to provide a caller evaluation of the attributes of service, called a performance measure or the mean level of performance, and,
  2. to enable the analytics to identify the service attributes that statistically impact caller satisfaction, essentially the drivers of caller satisfaction.


The performance means and calculated impact values of each service attribute enable the quantitative identification of areas in which service performance may be below an acceptable level and the resulting impact on satisfaction is high. The process of calculating impact values is a little more intricate than the process of calculating mean performance scores. The regression analysis from the caller satisfaction data computes the impact values and identifies the level of impact each attribute has on overall satisfaction. This allows supervisors and managers to concentrate on the areas that are most important to customers. 

The combined aspects of service produce an effect that is perceived by the customer. When determining improvement issues, you should consider how the attributes interact, rather than a static attribute-by-attribute evaluation. Therefore, use a regression model to examine the callers’ overall perception of the service received during the call. An example model, in words, is:

The rating of overall satisfaction with the agent (Q1) is a function of how quickly the agent understood the reason for the call (Q2), how professional the agent was during the call (Q3), the agent’s knowledge of products and services (Q4), the agent providing a clear and complete answer (Q5), the confidence in the information provided by the agent (Q6), and being treated as a valued customer (Q7).

That model, mathematically, is:

                  Q1 = f(Q2, Q3, Q4, Q5, Q6, Q7)

By running the regression analysis, you can determine which attributes impact caller satisfaction the most. Combine these drivers of satisfaction with the measures of performance to present a complete picture, as shown below. 
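
A hedged sketch of that regression step, using ordinary least squares on standardized responses via the statsmodels library (my choice of tooling, not necessarily what was used here); the Q1-Q7 columns follow the example model above, and the data are simulated, so the printed coefficients are illustrative only:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated survey responses; real data would come from the EQM program.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(1, 10, size=(200, 7)),
                      columns=["Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7"])

    # Standardize so the coefficients are comparable "impact values".
    z = (df - df.mean()) / df.std()

    X = sm.add_constant(z[["Q2", "Q3", "Q4", "Q5", "Q6", "Q7"]])
    fit = sm.OLS(z["Q1"], X).fit()

    # Larger absolute coefficients = stronger drivers of satisfaction.
    print(fit.params.drop("const").abs().sort_values(ascending=False))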

Based on the multivariate regression model, “conveying confidence in the response given,” “taking ownership for resolving the problem/issue,” and “quickly understanding the reason for the call” are the most important drivers of satisfaction with the representative for this set of data. The drivers-of-satisfaction results (presented with the impact/performance chart as above) will vary for different sets of data. One team compared to another may have very different strengths and weaknesses. The power of such analysis is that it narrows the scope of focal areas to a manageable number. The example below shows a drivers-of-satisfaction analysis for two teams. These teams differed only in their leadership and in the average tenure of their members; customer types served and location were similar.

[Chart: Team 1, impact vs. performance by service attribute]

[Chart: Team 2, impact vs. performance by service attribute]

Despite the similarities between these two teams, each team generated different performance means and drastically different impact values.

Comparative analysis of different supervisors, teams, locations and departments often reveals service segments that should be emulated and other segments in need of intervention. Based on the table below, only one of the four teams that represent the largest percentage of completed surveys is a top-performing team (based on mean survey scores). An analysis of the strengths, skills and approach taken by members of Team B could aid members of the remaining teams in improving their own performance levels. Conversely, analyzing the lower performing teams could identify the unique challenges they face in servicing customers.

Note:  C1-C5 would be general satisfaction criteria; Q1-Q6 would be Agent attributes.

In the example below, we analyzed the characteristics of a low-performing team in order to design corrective training. A drivers-of-satisfaction analysis revealed that “knowledge of the company’s products” and “completeness of responses” were the behaviors that had the greatest impact on the customer’s perception of the call. We plotted individual performance on these two key behaviors to create a visual profile of the team’s makeup. The team’s mean scores (on a 1-9 scale) on these two key behaviors divide the scatter plot below into four quadrants. Each quadrant represents a unique agent profile, with a known set of strengths and weaknesses. For example, quadrant III, in the lower left of the scatter plot, represents the call center’s risk. These agents are below-average performers on both of the key behaviors that have the greatest impact on customer perception of the service experience. While this quadrant represents a comparatively low percentage of this team’s membership, management must assess skill and desire to improve in a timely manner in order to minimize the risk to the contact center and, ultimately, the company’s revenue stream.
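
A minimal sketch of that quadrant assignment, using the team means on the two key behaviors as the dividing lines (the agent IDs and scores are hypothetical):

    # Assign agents to the four quadrants described above, using team means
    # on the two key behaviors as dividing lines. IDs and 1-9 scores are
    # hypothetical.
    agents = {  # agent_id: (product knowledge, completeness of responses)
        "A01": (8.4, 7.9),
        "A02": (6.1, 8.2),
        "A03": (5.2, 4.8),
        "A04": (7.8, 5.5),
    }

    mean_know = sum(k for k, _ in agents.values()) / len(agents)
    mean_comp = sum(c for _, c in agents.values()) / len(agents)

    def quadrant(knowledge: float, completeness: float) -> str:
        if knowledge >= mean_know and completeness >= mean_comp:
            return "I: above average on both"
        if completeness >= mean_comp:
            return "II: complete but less knowledgeable"
        if knowledge >= mean_know:
            return "IV: knowledgeable but incomplete"
        return "III: below average on both (the risk group)"

    for agent_id, (k, c) in agents.items():
        print(agent_id, quadrant(k, c))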

Let’s look at an instance where agent tenure was selected as the differentiating variable among agents. A great deal of time and energy is spent in the contact center industry empowering and developing agents in hopes of ensuring longevity. These actions do not always guarantee that the most tenured agents will be the best performing agents, as was the case in the example below.

The figure above clearly places peak agent performance at approximately 10 months of tenure. Introducing a performance intervention prior to month 10 of an agent’s tenure can extend the peak performance level. 
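
One way to estimate that peak is to fit a quadratic curve to satisfaction-by-tenure and take its vertex; here is a sketch under that assumption (the data points are hypothetical, shaped to peak near month 10):

    import numpy as np

    # Fit a quadratic curve to satisfaction-by-tenure and locate its peak.
    # The tenure/satisfaction pairs are hypothetical.
    tenure_months = np.array([1, 3, 5, 7, 9, 11, 13, 16, 20, 24])
    satisfaction = np.array([62, 70, 76, 81, 84, 85, 83, 79, 74, 70])

    a, b, c = np.polyfit(tenure_months, satisfaction, deg=2)
    peak_month = -b / (2 * a)  # vertex of the fitted parabola

    print(f"Estimated peak performance at ~{peak_month:.1f} months of tenure")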

Call resolution plays a key role in driving customer satisfaction. Significant differences in satisfaction scores exist between customers whose calls are brought to resolution and those whose problems/inquiries require follow-up. The table below exemplifies exactly why call resolution is such a key metric. 

Repeat calls also have a dramatic impact on customer satisfaction. Repeated calls by customers are not only costly from the perspective of agent talk time, but also have a fairly severe impact on customer satisfaction with the company, the call and the agent.

With a representative sample, we can extrapolate the percentage of repeat calls to all calls taken during the month, in order to calculate the operational cost of repeat calls. The average cost per call is multiplied by the number of repeat calls (as determined by percentages below) for the second, third, etc., calls required. Keep in mind that the indirect cost is also a factor as it is associated with the significantly decreased satisfaction as shown in the section above.
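
As a minimal sketch of that extrapolation (all input figures are hypothetical):

    # Extrapolate the monthly operational cost of repeat calls.
    # All input figures are hypothetical.
    calls_per_month = 100_000
    avg_cost_per_call = 6.50  # fully loaded cost per call, in dollars

    # Share of calls that are a 2nd, 3rd, or later call on the same issue,
    # extrapolated from a representative survey sample.
    repeat_rates = {"2nd call": 0.12, "3rd call": 0.04, "4th+ call": 0.01}

    repeat_calls = sum(rate * calls_per_month for rate in repeat_rates.values())
    direct_cost = repeat_calls * avg_cost_per_call

    print(f"Estimated repeat calls per month: {repeat_calls:,.0f}")
    print(f"Direct cost of repeat calls: ${direct_cost:,.0f}")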

An analysis of company-level data in this manner provides members of the operational team with a solid understanding of current customer satisfaction levels, the drivers of customer satisfaction and areas in need of improvement. However, examining the data longitudinally reveals performance trends and the impact of interventions. 

The figure below is a control chart for call resolution. Control charts are often used in Six Sigma to differentiate between normal and abnormal process variation. Each point in the control chart below represents weekly performance on the key metric of call resolution. The blue horizontal lines represent the upper (UCL) and lower (LCL) control limits for this metric, based on mean performance and the standard deviation of weekly performance around this mean. Any point that resides either below the LCL or above the UCL indicates that the call resolution process is out of control, requiring an intervention.

Performance above the UCL indicates that agents are resolving an abnormally high percentage of calls. On the surface this may seem desirable, but further analysis reveals that resolution was gained at the expense of customer satisfaction. Performance below the LCL could have severe implications on customer satisfaction and contact center costs.
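
A minimal sketch of the control-limit calculation, assuming the conventional three-sigma limits and a stable baseline period (the post states only “mean and standard deviation”; the weekly rates are hypothetical):

    import statistics

    # Weekly call-resolution rates from a stable baseline period (hypothetical).
    baseline = [0.78, 0.81, 0.79, 0.80, 0.77, 0.80, 0.82, 0.79]
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    ucl, lcl = mean + 3 * sd, mean - 3 * sd  # three-sigma control limits

    # Monitor new weeks against the baseline limits.
    new_weeks = {"week 9": 0.80, "week 10": 0.92, "week 11": 0.64}
    for week, rate in new_weeks.items():
        if rate > ucl:
            status = "above UCL: investigate (resolution at satisfaction's expense?)"
        elif rate < lcl:
            status = "below LCL: intervene"
        else:
            status = "in control"
        print(f"{week}: {rate:.0%} ({status})")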

Next in the series, we turn our focus towards the Supervisors and Agents. 


Communicating the Results – Part 1 of a 4 Part Series: Executive Management

A few years ago, my husband and I took a trip to Greece. We wanted to explore the countryside for a few days and decided to rent a car in Athens. At the reservation desk, the nice gentleman at the counter handed me a road map. Eager to get on our way, I thanked him, put the map in my bag, got into the car and away we went. As my husband was driving, I opened the map to take a look at where we were heading. It was in English. I looked out the window. The signs were in Greek. I could not match the symbols in the Greek words to what I was reading on the map. As they say, “It was all Greek to me,” and out the window the map went (not literally out the window). While it was considerate of the car rental representative to hand me a map in my own language, it was a totally useless tool for its intended purpose. In the end, he truly did not know what I needed.


Know Your Audience

We all know that in every situation, personal or corporate, you need to know your audience. This is not breaking news. However, do you know what information your audience truly needs? What does your Executive Team need to know to make strategic decisions, versus what do your agents need to know to perform consistently well? Therein always lies the challenge. Analysis must be relevant to the decisions that the audience must make. Actionable data provide answers, direction and purpose.

Communicating the results of your External Quality Monitoring (EQM) research effort is a key but sometimes overlooked step in the process. Doing this right can be the difference between renewed funding and cutbacks that cannot be supported; the difference between usable feedback and mass confusion; the difference between having an engaged, motivated group dedicated to the Rally Cry for the customer and just getting by with handling customer interactions.

The audiences most likely to require feedback from the EQM program are:

  1. Executive management,
  2. Operational management of each contact center,
  3. Teams and individual agents.

Each of these audiences will need analysis on different data and graphic or tabular presentations of the data. Additionally, consider when oral presentations will be needed and prepare for those in conjunction with the written reports. Never miss an opportunity to tout the advances made by the program and the benefits achieved.

This 4-Part series focuses on communicating the results that are needed to the appropriate teams, and today we place our focus on Executive Management.


Reports for Executive Management

As a critical function within the enterprise, any report to executive management must summarize the contribution achieved for the investment made. Beyond a mere presentation of high-level numeric results, an executive-level report should include a summary of the mission, the investment in customer relationship management, the contribution to customer loyalty and sales, a summary of the product, service or process issues identified for enhancement, and the results of those initiatives (beyond the contact center).

Looking first and foremost at executive management needs, a first-rate presentation is paramount. Looks matter a good deal at this level, so do not skimp on color graphs and high-quality paper.

In general, such a presentation will require an executive summary, a response questionnaire, a narrative, including tables and graphs, as appropriate, and a summary/conclusion section.

1. Executive Summary: Give an overview briefly stating the purpose and results of the research. The executive summary must be parsimonious. It should be brief and cogent, wasting no space or words. Make it as objective and clean as possible.

2. Response Questionnaire: Include the survey instruments next. Evaluators of the research need to understand exactly what was asked. It is generally a good idea to include an accounting of the responses to the questions. Those will be readily available from the frequency tables in the analysis printout. You can include both the questionnaire and the responses in this step, hence the name “response questionnaire.” It is also possible to include the means for each question if that information will be meaningful to the reader. Some people are sensitive to the order in which the questions are asked, so it is wise to use that order in this section and indicate this clearly. Avoid needless discussion on question order to save time for the important business of selling executives on the value of the research initiative. Below is an example of a response questionnaire.

3. Narrative: In the narrative, state the purpose at the beginning and the results at the end, but, in between, tell how the research was conducted: who, what, when, where, why, how. Expect to put every piece of important information in the narrative at least twice: once verbally, and at least once more in a table, graph or both. Remember that people do not process information the same way. Some are verbally oriented; others are visual learners, so graphs will be easier to absorb. Still others relate best to numbers — and a table that would put most of us to sleep will sing out loud and clear to them. Include a plan for how the results will be used and outline additional resources needed, including budget, time, space, etc. If the proposed plan can reassign already-available resources without incurring additional expense, then all the better. Put that information in and anything else that supports the argument. Sample components of the narrative section of an executive committee report are shown below.


Example 1

This past quarter, the contact center experienced an unexpected 40 percent increase in call volume, due in large part to the promotional campaign launched in early July. The impact of the large increase in call volume had an adverse effect on operational metrics such as average speed of answer (ASA), average wait time and service level, as well as customer satisfaction levels. As a result, the contact center fell below performance goals on External Quality Monitoring customer satisfaction metrics for the first time over the last five fiscal quarters.


Example 2

In an average month, 100,000 customers call into our phone support center.  At an average monthly value of $50 per customer, our phone support center has the potential to impact approximately $5 million in revenue. This past year, our phone support center reached performance goals (customer delight of 60 percent, and customer dissatisfaction of 5 percent), representing a 15 percent improvement in performance over the prior year. The result of this improvement is the protection of $750,000 of company revenue. In comparing this figure to the operating costs for this same period, a return on investment figure of 138 percent results.
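
The arithmetic behind Example 2 can be reproduced directly. In this sketch the operating cost is back-solved from the quoted 138 percent ROI, since the example does not state it:

    # Reproduce the revenue-protection arithmetic from Example 2. The
    # operating cost is back-solved from the quoted 138% ROI (an assumption;
    # the example does not state it directly).
    customers_per_month = 100_000
    value_per_customer = 50  # average monthly value, in dollars
    revenue_at_stake = customers_per_month * value_per_customer  # $5,000,000

    protected_revenue = 750_000  # from the 15% performance improvement
    operating_cost = protected_revenue / 2.38  # ~$315,000 implies 138% ROI

    roi = (protected_revenue - operating_cost) / operating_cost
    print(f"Revenue at stake: ${revenue_at_stake:,.0f}")
    print(f"ROI: {roi:.0%}")  # ~138%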


Example 3

During the first quarter of this year, our contact center once again exceeded the performance of our competitors in the industry. Our domestic location (location 2) contributed to this positive standing, while our offshore location (location 1) continued to struggle to meet performance goals.


4. Summary: The summary/conclusion should again state the purpose of the research, the results of the research, the uses to which the results can be put, and subsequent plans for implementation. This should be much shorter than the narrative, and should highlight what is successful and useful. It is mission critical to aid executives in understanding the research, so make it clear.

The outlined report is intentionally redundant. It is the writer’s job to make it seem less so. Making the same points repeatedly is necessary since this may be the only opportunity to win the case. The idea is to repeat the major points but with more detail from executive summary through narrative, and then summarize again in the conclusion, leaving the reader no choice but to see the absolute reasonableness of the conclusions. A report should not leave the reader with questions. Make all the necessary information available, place that information into multiple formats, polish the language and present it in an appealing report. Again, it is critical to hold this job to the highest standard. Allocate sufficient time for the best report writer and editor available to do this job.

Next in the series, we turn our focus towards the Operations team. 

~ Dr. Jodie Monger, President


The Mounting Data Crush

Typically, our “Knuggets and Knuckleheads” posts are filled with customer comments and feedback that make you laugh, or perhaps give you insight into some of the pains and barriers your customers are experiencing FROM your organization. To shake it up, this Tuesday (instead of Monday) we are bringing you an organizational post that showcases the ‘knucklehead’ practices taking place INSIDE organizations today. This video not only shows the nuances of the 2-3 days organizations spend each month pulling multiple data sources together for post-mortem reporting, but also offers a ‘knugget’ of wisdom on how to stop the madness and relieve the pain.

Happy Tuesday!

~ Dr. Jodie Monger, President
