How do we ensure that customer experience measurement is a profitable business process in the call center and elsewhere in the organization? To increase the value of the initiative, make certain that the research is done the right way, not merely done for the sake of surveying customers. Note that customer feedback results will be used by colleagues regardless of the number of caveats listed in the footnotes, so be diligent in providing valid and credible customer intelligence from your contact center. A poor measurement program and inaccurate reporting can have profound and far-reaching effects on your credibility in the organization.
Put another way, are you guilty of survey malpractice by giving your company faulty information based on inadequate research methods and interpretations?
Malpractice is a harsh word — it directly implies professional malfeasance through negligence, ignorance or intent. Doctors and other professionals carry insurance for malpractice in the event that a patient or client perceives a lack of professional competence. For contact center professionals and other managers, there is no malpractice insurance to fall back on for acts of professional malfeasance, whether they’re intentional or not. Of course, it is much more likely that one would be fired than sued for bad acts, but that offers little comfort.
Never put yourself in a position where your competence can be called into question. Yet many call center managers are “skating on thin ice” when it comes to their customer satisfaction measurements: there are demonstrable failings in many of the typical practices call center managers use. By definition, an ineffective measurement program generates errors from negligence, ignorance and/or intentional wrongdoing. You have a fiduciary responsibility to your company — and recommendations made based on erroneous customer data do, indeed, meet the definition of malpractice.
Measurement programs must meet certain scientific criteria to be statistically valid with an acceptable confidence level and level of precision or tolerated error. Without these considerations, you are guilty of survey malpractice. Defending your program with statements like, “it has always been done this way” or “we were told to do a survey” is not sufficient. Research guidelines adhered to in academia apply to the business world, as well. A deficient survey yields inaccurate data and results in invalid conclusions no matter who conducts it. Unnecessary pain and expense are the natural outgrowths of such errors of judgment.
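The confidence level and tolerated error mentioned above translate directly into a minimum sample size. As a minimal sketch (using the standard formula for estimating a proportion, with illustrative defaults of 95% confidence and a ±5-point margin of error; the function name and parameters are assumptions for this example):

```python
import math

def sample_size(confidence_z: float = 1.96,
                margin_of_error: float = 0.05,
                proportion: float = 0.5) -> int:
    """Minimum number of completed surveys to estimate a proportion.

    confidence_z: z-score for the confidence level (1.96 ~ 95%).
    margin_of_error: tolerated error, e.g. 0.05 for +/-5 points.
    proportion: expected proportion; 0.5 is the most conservative choice.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size())                        # 385 completes for 95%, +/-5%
print(sample_size(margin_of_error=0.03))    # 1068 completes for 95%, +/-3%
```

The point of the sketch is that "enough surveys" is not a matter of opinion: tightening the tolerated error from ±5% to ±3% nearly triples the required sample, which is exactly the kind of trade-off a program should decide up front rather than defend after the fact.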
To maximize the return on investment (ROI) for the EQM customer measurement program, and to ensure that the program has credibility, install the science before collecting the data. Make sure that the initial program setup is comprehensive. If there is no research expert on staff, then hire this out to a well-credentialed expert. The alternative is to train someone in the science around creating and interpreting the gap variable from a delayed measurement. Or better still, engage a qualified expert to design a program to measure customer satisfaction immediately after the contact center interaction.
Before assuming that survey malpractice does not or will not apply to your program, consider the following tell-tale signs of errors and biases, as they are critical to a good program.
1. Measuring too many things. Your survey of a five-minute call center service experience takes the customer 15 minutes to complete and includes 40 questions. While everyone in your organization has a need for customer intelligence, you should not be fielding only one survey to get all of the answers.
Should the call center be measuring satisfaction with the in-home repair service, the accounting and invoicing process, the latest marketing campaign, or the distribution network? Certainly input on these processes is necessary, but don’t try to get it all on a single survey.
2. Not measuring enough things. An overall satisfaction question and a question about agent courtesy do not make a valid survey. Without a robust set of measurement constructs, you will not find the answers you need. Three or four questions will not facilitate a change in a management process; nor will they enable effective agent coaching or be considered a valid measure to include in an incentive or performance plan.
3. Measuring questions with an unreliable scale. In school, everyone agreed on what test scores meant: 95 was an A, 85 was a B, and 75 was a C, and every score in between carried its own meaning, as well. Yet, when it comes to service measurement, we tend to give customers limited responses. What do the categories excellent, good, fair and poor really mean? Offering limited response options does not permit robust analysis, and statistical analysis is often applied incorrectly. In addition, using a categorical scale or a scale that is too small (like many typical 5-point survey questions) is not adequate for the evaluation of service delivery.
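A quick numeric illustration of why a coarse categorical scale loses information. Here two hypothetical agents receive clearly different ratings on a 0-10 numeric scale, but once the ratings are collapsed into four categories (the cut points below are invented for this example, not a standard), the gap between them disappears:

```python
from statistics import mean

# Hypothetical ratings for two agents on a 0-10 numeric scale.
agent_a = [8, 8, 9, 9, 10]
agent_b = [7, 7, 8, 8, 9]

def to_category(score: int) -> str:
    """Collapse a 0-10 rating into a four-option categorical scale
    (illustrative cut points, assumed for this sketch)."""
    if score >= 7:
        return "excellent"
    if score >= 5:
        return "good"
    if score >= 3:
        return "fair"
    return "poor"

print(mean(agent_a), mean(agent_b))        # 8.8 vs 7.8: a real gap
print({to_category(s) for s in agent_a})   # {'excellent'}
print({to_category(s) for s in agent_b})   # {'excellent'}: gap erased
```

This ceiling effect is one reason small or categorical scales make coaching and performance comparisons unreliable: two agents a full point apart look identical in the reported results.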
4. Measuring the wrong things or the right things wrong. Surveys should not be designed to tell you what you want to hear, but rather what you need to hear. Constructs that are measured should have a purpose in the overall measurement plan. Each item should have a definitive plan for use within the evaluation process. The right things to measure will focus on several overall company measures that affect your center (or your center’s value statement to the organization), the agents and issue/problem resolution.
5. Asking for an evaluation after memory has degraded. When we think about time, 24 to 48 hours doesn’t seem that long. But when you’re measuring customer satisfaction with your service, it’s the difference between an accurate evaluation and a flawed one. Do you remember exactly how you felt after you called your telephone company about an issue? Could you accurately rate that particular experience 48 hours later, after other calls to the same company or other companies have been made? That’s what you’re asking your customers to do when you delay measurement. It opens the door to inaccurate reporting and compromised decision-making, and is also an unfair evaluation of your agents.
Conducting follow-up phone calls to gather feedback about the center’s performance is a common pitfall. While that research methodology certainly has its place in the company’s research portfolio, it is less effective than using point-of-service, real-time customer evaluations.
Mail and phone surveys are useful for research projects that are not tactical in nature, but instead focus on the general relationship, product features, additional options, color and so on.
6. Wiggle room via correction factors. If you’re using correction factors to account for issues in the data or to placate agents or the management team, some aspect of the survey design is flawed. A common adjustment is to collect 11 survey evaluations per agent and delete everyone’s lowest score. However, with a valid measurement that includes numeric scores, as well as explanations for scores and a rigorous quality control process, adjustments in the final scores will not be necessary. Making excuses for the results or allowing holes to be poked in the effort diminishes and undermines the effectiveness of the program, and highlights an opening for survey malpractice claims.
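The drop-the-lowest adjustment described above can be shown, with a few lines of arithmetic, to do exactly what it is accused of: inflate scores by discarding the one evaluation most likely to carry a coaching opportunity. The 11 scores below are hypothetical:

```python
from statistics import mean

# Hypothetical 11 evaluations for one agent on a 0-10 scale.
# The single 3 is a genuinely dissatisfied customer.
scores = [9, 8, 10, 7, 9, 3, 8, 9, 10, 8, 9]

raw = mean(scores)                    # honest average of all 11 scores
adjusted = mean(sorted(scores)[1:])   # "correction": drop the lowest score

print(round(raw, 2))       # 8.18
print(round(adjusted, 2))  # 8.7 -- half a point higher, bad call hidden
```

The adjustment lifts the agent’s average by roughly half a point, and the one interaction most worth investigating vanishes from the record. A valid measurement with numeric scores, score explanations and quality control removes the need for this kind of wiggle room.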
7. Accuracy and credibility of service providers and product vendors. As with any technology or service, the user assumes responsibility for applying the correct tool, or applying the tool correctly.
There are plenty of home-grown or vendor-supplied tools to field a survey, but, again, if you do not apply the functionality correctly, you will be responsible for the error. Keep in mind that some service providers are only interested in selling you something that fits into their cookie-cutter approach, and it will not be customized to your specific requirements.
~ Dr. Jodie Monger, President
This post is part of the book, “Survey Pain Relief.” Why do some survey programs thrive while others die? And how do we improve the chances of success? In “Survey Pain Relief,” renowned research scientists Dr. Jodie Monger and Dr. Debra Perkins tackle numerous questions that plague survey programs. Inside, the doctors reveal the science and art of customer surveying and explain proven methods for creating successful customer satisfaction research programs.
“Survey Pain Relief” was written to remedy the billions of dollars spent each year on survey programs that can best be described as survey malpractice. These programs are all too often accepted as valid by the unskilled and unknowing. Inside is your chance to gain knowledge and avoid being led by the blind. For more information: http://www.surveypainrelief.com/