When trying to determine the best scale to use for customer satisfaction surveys, your decision should address these three important areas:
- What is the data collection methodology?
- How will the survey results be used?
- What is the best approach for analyzing survey results to ensure value, accuracy, and accountability?
What is the best survey method for collecting customer experience measurements?
Your customers must be able to easily understand, remember, and correctly apply the scale to their experience. Automated post-call survey programs also need to be fast for the customer. This is why you need a scale anchored with a high and a low end rather than one that labels each scale point with a category.
Categorical scales must be repeated multiple times to ensure that respondents apply them correctly, which limits their effectiveness in the post-call IVR survey methodology, where the goal is to quickly collect responses to as many research variables as the customer will tolerate.
How will the survey results be used?
Most often, customer satisfaction research in the contact center industry serves as one of the metrics in performance management programs. When teams or agents are held accountable for customer evaluations, it’s natural for the focus to be on what’s “fair”.
Because agents need to achieve a particular rating level, the scale used for the evaluation is of critical importance (think about the variability in the response options). Fewer scale points mean less variability in the data.
Most research projects in call centers use a 5-point scale, and it is often normalized using a transformation table for comparative purposes. Normalization has no impact on the relationship of the data points to one another, so the results are not affected.
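As a sketch of what such a normalization looks like (a simple linear rescaling to 0–100 is assumed here; the actual transformation table may differ), note that the conversion preserves the order of the ratings:

```python
def to_100_point(rating, scale_max, scale_min=1):
    """Linearly rescale a rating to a 0-100 scale (assumed transformation)."""
    return (rating - scale_min) / (scale_max - scale_min) * 100

# A 5-point scale maps to 0, 25, 50, 75, 100.
five_point = [to_100_point(r, 5) for r in range(1, 6)]
print(five_point)  # [0.0, 25.0, 50.0, 75.0, 100.0]

# Order-preserving: if a < b on the original scale, the same holds after conversion.
assert all(a < b for a, b in zip(five_point, five_point[1:]))
```

Because the rescaling is monotonic, comparisons between scores mean the same thing before and after conversion, which is why normalization does not change the results.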
But even when the narrow scale is converted to a larger one, the limited response choices and the lack of dispersion across the scale remain. The example below makes this limitation in precision and “fairness” clear.
As the scale conversion chart shows, wider endpoints yield more variability in the ratings, which directly improves the precision of the results. Research has shown that responses about feelings of satisfaction differ more when respondents are offered more points on the scale (Bendig, 1954; Garner, 1960).
On a 5-point scale there is little room for variation between scale points: each step corresponds to a 25-point jump on the 100-point scale. Surveys with fewer response options “force” respondents into a category, which causes information loss and renders the results less reliable than those with more variability (Van Bennekom, 2002).
Research also highlights a cognitive difference on the 5-point scale: the perceived gap between a rating of 3 and a rating of 4 is smaller than the gap between a 4 and a 5 (Van Bennekom, 2002).
Here’s the important point for you: apply this logic to the agents’ “fairness” test, and the difficulty of securing a 5 versus a 4 (compared with a 4 versus a 3) becomes apparent. Considering that only ratings of 5 count toward the desired service level (discussed below), the goal is actually harder to achieve with a 5-point scale. It does not take agents long to figure this out and then discount the results, essentially undermining your entire program.
The issue of variability must also be considered alongside reliability. Surveys with more response alternatives are more reliable than those with fewer (Scherpenzeel, 2002; Alwin and Krosnick, 1991). Surveys that use a 7-, 9-, or 10-point scale are the most reliable, based on several academic research studies (Andrews and Withey, 1980; Andrews, 1984; Alwin and Krosnick, 1991; Bass, Cascio and O’Connor, 1974; Rodgers, Andrews and Herzog, 1992).
With more scale points, consumers can make better evaluations of their experience and give a clearer indication of their true feelings about the interaction. On the 9-point scale, the difference between each scale point is 12.5 (rounded to 13 in the chart), permitting respondents to provide a more granular rating of the service experience.
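Assuming the same linear conversion to a 100-point scale, the spacing between adjacent ratings is simply 100 divided by the number of steps, which reproduces the 25-point and 12.5-point gaps cited above:

```python
def point_gap(n_points):
    """Spacing between adjacent ratings after linear conversion to 0-100."""
    return 100 / (n_points - 1)

for n in (5, 7, 9, 10):
    print(f"{n}-point scale: {point_gap(n):.1f} points between adjacent ratings")
# 5-point: 25.0, 7-point: 16.7, 9-point: 12.5, 10-point: 11.1
```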
What is the best approach for analyzing survey results?
The scale that you use has a tremendous impact on the analysis that can be done and on what can be done with the results. Research on satisfaction and its relationship to customer loyalty defines a successful service interaction as reaching the zone of affection at 4.3 on the 5-point scale, which corresponds to a performance goal of 87 on the 100-point converted scale. (See: Putting the Service-Profit Chain to Work.)
This is important to understand. By this definition, only a score of 5 on a 5-point scale counts as a successful service experience. On a 7-point scale, only a score of 7 indicates delight, but on the 9- and 10-point scales, both of the top two scores fall into the customer delight category and therefore meet the goal.
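This can be checked directly. Assuming the linear conversion to a 100-point scale and the 87-point goal described above, the following sketch shows which ratings on each scale clear the threshold:

```python
def meets_goal(rating, scale_max, goal=87, scale_min=1):
    """True if the rating, linearly converted to 0-100, reaches the goal."""
    return (rating - scale_min) / (scale_max - scale_min) * 100 >= goal

for n in (5, 7, 9, 10):
    passing = [r for r in range(1, n + 1) if meets_goal(r, n)]
    print(f"{n}-point scale: ratings {passing} clear the 87 threshold")
# 5-point: [5]; 7-point: [7]; 9-point: [8, 9]; 10-point: [9, 10]
```

On the 9-point scale an 8 converts to 87.5, just over the goal, which is why the top two boxes count; on the 5-point scale a 4 converts to only 75, so nothing short of a perfect 5 succeeds.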
If you are an agent being held accountable for customer experience ratings, knowing how difficult it is to get a 5, do you think this is fair? The scale that you use is one of the most important decisions you will make about your program.
Additionally, a scale that allows more dispersion of the ratings makes trends in satisfaction (or dissatisfaction) easier to define. The goal of a post-call IVR survey program is to identify improvement areas by quantifying the items that drive the overall satisfaction rating, and a scale anchored with high and low endpoints is conducive to this analysis (Van Bennekom, 2002).
With fewer response choices and less variability and dispersion in the scores, the inherent clustering of the ratings reduces your ability to identify opportunities. This will cause issues when reporting the results, as it may call the reliability of your findings into question.
Use this post-call survey scale
For post-call survey programs, a 9-point scale is the most effective. Customers can easily use the rating scheme. Agent performance can be assessed fairly, with more rating choices available, and defining success as a Top Two Box score (8 and 9) is more motivating than a Top Box score (5 alone). The management team can also more easily apply the analytic techniques appropriate for a high/low-anchored scale, compared with the analysis required for categorical scales.
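A Top Two Box score is straightforward to compute from raw responses. The sketch below (with a hypothetical batch of ratings) shows the calculation for a 9-point scale:

```python
def top_two_box_pct(ratings, scale_max=9):
    """Percentage of responses in the top two scale points (8 and 9 on a 9-point scale)."""
    if not ratings:
        return 0.0
    top = sum(1 for r in ratings if r >= scale_max - 1)
    return 100 * top / len(ratings)

# Hypothetical batch of post-call survey responses on a 9-point scale.
responses = [9, 8, 7, 9, 5, 8, 9, 6]
print(f"Top Two Box: {top_two_box_pct(responses):.1f}%")  # 62.5%
```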
What do you do now?
If the information above is clear enough for you to move forward, then you are all set. If it has caused confusion or does not give you what you need, then you should register for a complimentary consultation, because your success requires you to know what all of this means, or to have a resource that does.
Alwin, D.F. and Krosnick, J.A. (1991). The Reliability of Survey Attitude Measurement: The Influence of Question and Respondent Attributes. Sociological Methods and Research, 20: 139 – 181.
Andrews, F.M. (1984). Construct Validity and Error Components of Survey Measures: A Structural Modeling Approach. Public Opinion Quarterly, 48 (2): 409-442.
Andrews, F.M. and Withey, S.B. (1980). Social Indicators of Well-Being: Americans’ Perceptions of Life Quality. Annals of the American Academy of Political and Social Science, 451: 191-192.
Bass, B.M., Cascio, W.F., & O’Connor, E.J. (1974). Magnitude estimations of expressions of frequency and amount. Journal of Applied Psychology, 59, 313-320.
Bendig, A.W. (1954). Transmitted information and the length of rating scales. Journal of Experimental Psychology, 47, 303-308.
Garner, W.R. (1960). Rating Scales: Discriminability and information transmission. Psychological Review, 67, 343-352.
Rodgers, W.L., Andrews, F.M. and Herzog, A.R. (1992). Quality of Survey Measures: A Structural Modeling Approach. Journal of Official Statistics, 8 (3): 251-275.
Scherpenzeel, A. (2002). Why use 11-point Scales? http://www.swisspanel.ch/file/doc/faq/11pointscales.pdf
Van Bennekom, F.C. (2002). Customer Surveying: A Guidebook for Service Managers. Bolton, MA, Customer Service Press.
- What is the best scale for customer satisfaction surveys? - May 8, 2017