Customer Intelligence

Are bad brand licensing agreements souring your brand image?

How many of you read the owner’s manual before trying out your new hedge-trimmer/Blu-ray player/ceiling fan/blender?  How many of you have ever read an owner’s manual?  I have files full of pristine manuals just in case I need them one day.  I’m pretty sure somewhere in those files is an owner’s manual for a double tape-deck, purchased back when they were state-of-the-art.  When we want service, product repair, replacement parts, or answers to questions about a product’s functionality, we don’t go digging through paper; we go to the Web.  But what happens if the brand name on the product does not belong to the company that manufactures and supports it?  Quite often this happens:

“I bought a COMPANY ABC product.  It had the COMPANY ABC name on it.  I called COMPANY ABC only to find out that it is not a COMPANY ABC product and they don’t service it.  What the &*%@ are they trying to pull?”

It’s not uncommon for companies to license their coveted brand name to other organizations.  Continue reading

Is your Business Intelligence delivering ‘the goods’ like this?

In order to remain competitive in the marketplace, many manufacturers have responded to shrinking margins by buying cheaper, lower-quality parts. Are you doing the same? The current economic environment has compelled many consumers to repair failed products rather than replace them. This combination has placed a growing burden on the call centers and customer service departments responsible for balancing customer demands against corporate profitability.

While serving the product support space during this time of conflicting paradigms, Customer Relationship Metrics has advised many clients on the best way to balance cost containment with preserving and enhancing their customer relationships. This includes everything from business rules and back-office processes to the ways they communicate with customers and their relationship management strategies.  Customers have reacted positively to these changes, as can be seen from the call center benchmarking figures below.

[Call center benchmarking charts]

The data in these call center benchmarking charts show that unnecessary repeat calls have decreased while customers’ opinion of the agents’ service delivery has been protected. Nothing is more important than protecting the brand, and it has demonstrably improved. Most notably, this increased focus on customer satisfaction and the use of customer intelligence analytics services has also resulted in greater profitability and market share for our business partners – a perfect win-win in an environment where this would not be expected!

While this information comes from the product support space, the same type of insight and engineering is possible in other industries concerned with retaining and building their customer base and preserving bottom-line performance.

For the past 20 years we have seen the demand for business intelligence grow and, at times, decline. One thing we do know is that the companies that consistently focus on it have been more successful in the long term. It’s all about your values, business culture and integrity. You can say you are customer-focused, but do your actions speak louder than your words? You can’t hide the truth from your customers or your competition.

You can learn more about business intelligence and the customer experience Big Data mistakes to avoid through a free ebook, which includes a complimentary self-assessment, in our resource library. There is no need to register; you can download your copy anytime by clicking on the button.

7 Call Center Survey Rules to Live By

How do we ensure that customer experience results are a profitable business process in the call center and elsewhere in the organization? To increase the value of the initiative, be certain that the research is done the right way, and not only done for the sake of surveying customers.  Note that customer feedback results will be used by colleagues regardless of the number of caveats listed in the footnotes, so be diligent in providing valid and credible customer intelligence from your contact center.  The consequences of a poor measurement program and inaccurate reporting can have profound and far-reaching effects on your credibility in the organization.

Put another way, are you guilty of survey malpractice by giving your company faulty information based on inadequate research methods and interpretations?

Malpractice is a harsh word — it directly implies professional malfeasance through negligence, ignorance or intent.  Doctors and other professionals carry insurance for malpractice in the event that a patient or client perceives a lack of professional competence.  For contact center professionals and other managers, there is no malpractice insurance to fall back on for acts of professional malfeasance, whether they’re intentional or not.  Of course, it is much more likely that one would be fired than sued for bad acts, but that offers little comfort.

Never put yourself in a position where your competence can be called into question.  Yet many call center managers are “skating on thin ice” when it comes to their customer satisfaction measurements: there are demonstrable failings in many of their typical practices.  By definition, an ineffective measurement program generates errors from negligence, ignorance and/or intentional wrongdoing. You have a fiduciary responsibility to your company, and recommendations made based on erroneous customer data do, indeed, meet the definition of malpractice.

Measurement programs must meet certain scientific criteria to be statistically valid with an acceptable confidence level and level of precision or tolerated error.  Without these considerations, you are guilty of survey malpractice.  Defending your program with statements like, “it has always been done this way” or “we were told to do a survey” is not sufficient.  Research guidelines adhered to in academia apply to the business world, as well.  A deficient survey yields inaccurate data and results in invalid conclusions no matter who conducts it.  Unnecessary pain and expense are the natural outgrowths of such errors of judgment.
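As a rough illustration of what confidence level and tolerated error mean in practice, here is a minimal sketch, in Python, of the standard sample-size calculation for a survey proportion.  The 20,000-call monthly volume and the margins of error are hypothetical, not figures from any client program.

```python
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate the completed surveys needed for a given precision.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative assumption about response variability.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                   # finite population correction
    return math.ceil(n)

# Hypothetical example: a center handling 20,000 calls per month
print(required_sample_size(20000))        # ~377 completed surveys at +/-5%
print(required_sample_size(20000, 0.03))  # ~1,014 at the tighter +/-3%
```

The point is not the particular numbers but that the required number of completed surveys falls out of the confidence level and precision you choose, rather than out of whatever volume of responses happens to come back.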

To maximize the return on investment (ROI) for the EQM customer measurement program, and to ensure that the program has credibility, install the science before collecting the data.  Make sure that the initial program setup is comprehensive.  If there is no research expert on staff, then hire this out to a well-credentialed expert.  The alternative is to train someone in the science of creating and interpreting the gap variable from a delayed measurement.  Or better still, engage a qualified expert to design a program to measure customer satisfaction immediately after the contact center interaction.

Before assuming that survey malpractice does not or will not apply to your program, consider the following tell-tale signs of errors and biases; avoiding them is critical to a good program.

1.  Measuring too many things. Your survey of a five-minute call center service experience takes the customer 15 minutes to complete and includes 40 questions.  While everyone in your organization has a need for customer intelligence, you should not be fielding only one survey to get all of the answers.

Should the call center be measuring satisfaction with the in-home repair service, the accounting and invoicing process, the latest marketing campaign, or the distribution network? Certainly input on these processes is necessary, but don’t try to get it all on a single survey.

2.  Not measuring enough things. An overall satisfaction question and a question about agent courtesy do not make a valid survey.  Without a robust set of measurement constructs, you will not find the answers you need.  Three or four questions will not facilitate a change in a management process; nor will they enable effective agent coaching or serve as a valid measure to include in an incentive or performance plan.

3.  Measuring questions with an unreliable scale. In school, everyone agreed on what test scores meant: 95 was an A, 85 was a B, and 75 was a C.  Everything in between has its own mark associated with it, as well.  Yet, when it comes to service measurement, we tend to give customers limited responses.  What do the categories excellent, good, fair and poor really mean? Offering limited response options does not permit robust analysis, and statistical analysis is often applied incorrectly.  In addition, using a categorical scale or a scale that is too small (like many typical 5-point survey questions) is not adequate for the evaluation of service delivery.  (A small numeric illustration of the information lost to a coarse scale follows this list.)

4.  Measuring the wrong things or the right things wrong. Surveys should not be designed to tell you what you want to hear, but rather what you need to hear.  Constructs that are measured should have a purpose in the overall measurement plan.  Each item should have a definitive plan for use within the evaluation process.  The right things to measure will focus on several overall company measures that affect your center (or your center’s value statement to the organization), the agents and issue/problem resolution.

5.  Asking for an evaluation after memory has degraded. When we think about time, 24 to 48 hours doesn’t seem that long.  But when you’re measuring customer satisfaction with your service, it’s the difference between an accurate evaluation and a flawed one.  Do you remember exactly how you felt after you called your telephone company about an issue? Could you accurately rate that particular experience 48 hours later, after other calls to the same company or other companies have been made? That’s what you’re asking your customers to do when you delay measurement.  It opens the door to inaccurate reporting and compromised decision-making, and is also an unfair evaluation of your agents.

Conducting follow-up phone calls to gather feedback about the center’s performance is a common pitfall.  While that research methodology certainly has its place in the company’s research portfolio, it is less effective than using point-of-service, real-time customer evaluations.

Mail and phone surveys are useful for research projects that are not tactical in nature, but rather focused on the general relationship, product feature, additional options, color, etc.

6.  Wiggle room via correction factors. If you’re using correction factors to account for issues in the data or to placate agents or the management team, some aspect of the survey design is flawed.  A common adjustment is to collect 11 survey evaluations per agent and delete everyone’s lowest score.  However, with a valid measurement that includes numeric scores, as well as explanations for scores and a rigorous quality control process, adjustments to the final scores will not be necessary.  Making excuses for the results or allowing holes to be poked in the effort diminishes and undermines the effectiveness of the program, and highlights an opening for survey malpractice claims.  (The second sketch after this list shows how such an adjustment distorts the reported scores.)

7.  Accuracy and credibility of service providers and product vendors. As with any technology or service, the user assumes responsibility for applying the correct tool, or applying the tool correctly.

There are plenty of home-grown or vendor-supplied tools to field a survey, but, again, if you do not apply the functionality correctly, you will be responsible for the error.  Keep in mind that some service providers are only interested in selling you something that fits into their cookie-cutter approach, and it will not be customized to your specific requirements.
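To make rule 3 concrete, here is a minimal sketch with made-up 0-10 ratings for two hypothetical agents, showing how a coarse excellent/good/fair/poor scale can hide differences that a finer numeric scale preserves.  The scores and the category cut-offs are illustrative assumptions, not data from any real program.

```python
from statistics import mean

def to_category(score):
    """Collapse a 0-10 numeric rating into a coarse categorical label."""
    if score >= 9:
        return "excellent"
    if score >= 7:
        return "good"
    if score >= 5:
        return "fair"
    return "poor"

# Hypothetical ratings for two agents on a 0-10 scale
agent_a = [10, 9, 8, 7, 6, 5]
agent_b = [9, 9, 7, 7, 5, 5]

for name, scores in (("Agent A", agent_a), ("Agent B", agent_b)):
    labels = [to_category(s) for s in scores]
    counts = {c: labels.count(c) for c in ("excellent", "good", "fair", "poor")}
    print(name, "numeric mean:", round(mean(scores), 1), "category counts:", counts)

# Both agents show identical category counts (2 excellent, 2 good, 2 fair),
# yet their numeric means differ (7.5 vs. 7.0): the coarse scale hides the gap.
```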
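And to make rule 6 concrete, here is a similar sketch of what the “delete everyone’s lowest score” adjustment does to reported averages.  Again, the 0-10 scores are invented purely for illustration.

```python
from statistics import mean

# Hypothetical: 11 survey evaluations per agent on a 0-10 scale
evaluations = {
    "agent_1": [9, 8, 10, 7, 9, 8, 9, 10, 8, 9, 3],  # one genuinely poor interaction
    "agent_2": [7, 8, 7, 6, 8, 7, 7, 8, 6, 7, 7],
}

for agent, scores in evaluations.items():
    adjusted = sorted(scores)[1:]  # the "correction factor": drop the lowest score
    print(agent,
          "raw mean:", round(mean(scores), 2),        # 8.18 and 7.09
          "adjusted mean:", round(mean(adjusted), 2))  # ~8.7 and ~7.2

# The adjustment inflates every agent's average and erases exactly the
# interactions (the lowest-rated calls) that coaching should focus on.
```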

~ Dr. Jodie Monger, President

This post is part of the book, “Survey Pain Relief.”  Why do some survey programs thrive while others die? And how do we improve the chances of success? In “Survey Pain Relief,” renowned research scientists Dr. Jodie Monger and Dr. Debra Perkins tackle the questions that plague survey programs.  Inside, the doctors reveal the science and art of customer surveying and explain proven methods for creating successful customer satisfaction research programs.

“Survey Pain Relief” was written to remedy the billions of dollars spent each year on survey programs that can best be described as survey malpractice.  These programs are all too often accepted as valid by the unskilled and unknowing.  Inside is your chance to gain knowledge and avoid being led by the blind.  For more information, visit http://www.surveypainrelief.com/

