Businesses assess the attitudes of their customers using customer surveys. These surveys, typically conducted annually, help companies maintain or improve the quality of the customer relationship. That quality is usually indexed by a few key questions, each measuring something important about the health of the relationship. These "ultimate criteria", often housed in company dashboards and tracked over time, help guide executives' decisions on ways to improve the customer relationship. One unstated assumption we make as customer experience professionals is that customers' attitudes are amenable to change.
In this post, I examine the stability of three common customer metrics: customer sentiment, customer satisfaction and likelihood to recommend. The study compares customers' attitudes at two points 1.5 years apart to understand how stable those attitudes are over a non-trivial amount of time.
A B2B technology company conducts an annual customer relationship survey as part of its formal customer experience program. While the survey contained about 24 questions, we focused on three metrics:
- Customer Satisfaction: Customers were asked to provide a rating of their level of overall satisfaction (0 - Extremely dissatisfied to 10 - Extremely satisfied).
- Likelihood to Recommend: Customers were asked to provide a rating on the likelihood of recommending the company to friends/colleagues (0 - Not at all likely to 10 - Extremely likely).
- Customer Sentiment: Customers were asked, "Using one word, please describe COMPANY'S products/services." I employed machine learning to create a sentiment lexicon to scale each word along a sentiment continuum. The resulting metric is referred to as the Customer Sentiment Index (CSI); this measure can vary from 0 (negative sentiment) to 10 (positive sentiment). In prior posts, I've gathered evidence of the reliability and validity of this measure, showing its usefulness for customer experience (CX) programs.
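To make the CSI mechanics concrete, here is a minimal sketch of how one-word responses can be scored against a sentiment lexicon. The actual lexicon was derived with machine learning; the words, scores, and the neutral-default rule below are hypothetical placeholders on the same 0-to-10 scale.

```python
# Hypothetical sentiment lexicon mapping single words to 0-10 sentiment values.
# (Illustrative only -- the real CSI lexicon was built via machine learning.)
LEXICON = {
    "excellent": 9.2,
    "reliable": 7.8,
    "adequate": 5.1,
    "slow": 2.9,
    "terrible": 0.8,
}

def csi_score(word, default=5.0):
    """Look up a word's sentiment value; treat unknown words as neutral."""
    return LEXICON.get(word.strip().lower(), default)

# Score a batch of one-word survey responses and average them.
responses = ["Excellent", "slow", "reliable"]
scores = [csi_score(w) for w in responses]
mean_csi = sum(scores) / len(scores)
```

In practice the lookup table would cover thousands of words, and how unknown or misspelled words are handled is a design decision of its own.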
The two survey administrations were about six quarters apart: the first survey was administered in the Spring of 2014 and the second in the Fall of 2015.
Descriptive statistics of and correlations among the study variables are presented in Figure 1. Average ratings did not change meaningfully between the two time periods for any of the three measures; the average sentiment rating was 7.10 in both. A total of 68 respondents completed surveys in both survey periods. Owing to item-level nonresponse, the correlations in the bottom part of Figure 1 for CSI, Satisfaction and Recommend are based on sample sizes of 36, 68 and 67, respectively.
Stability of Customer Metrics
The correlation between CSI at time 1 and time 2 was .60, while the corresponding correlations for the satisfaction and recommendation questions were .42 and .27, respectively. This pattern of correlations shows that customer sentiment is more stable over time than satisfaction or recommendation intentions. People who reported positive sentiment at time 1 tended to report positive sentiment at time 2, and people who reported negative sentiment at time 1 tended to report negative sentiment at time 2. This pattern was less apparent for the recommend question.
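The stability estimates above are test-retest correlations: pair each customer's time-1 and time-2 ratings and compute Pearson's r. The sketch below shows the computation on hypothetical ratings (the study's actual paired data are not reproduced here); squaring r gives the proportion of time-2 variance accounted for by time-1 attitudes, which is how a stability correlation of .60 translates into 36% of variance explained.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [8, 6, 9, 4, 7, 5]  # hypothetical time-1 ratings
time2 = [7, 6, 8, 5, 8, 4]  # same customers, 1.5 years later

r = pearson_r(time1, time2)
variance_explained = r ** 2  # share of time-2 variance predictable from time 1
```

Only customers who responded in both waves enter the calculation, which is why the stability correlations rest on smaller samples than the full survey.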
Predictability of Customer Sentiment
Using the survey data collected at time 2, I found that satisfaction ratings for the 12 CX areas (e.g., product quality, account management, tech support) were more highly related to recommendation intentions (average r = .53) than to the CSI (average r = .29). This finding suggests that improvements in the customer experience will do more to improve recommendation intentions than to improve customer sentiment. In a previous post, however, I found that CX questions correlated with the CSI at a much higher level (r ≈ .47) across different B2B companies.
Summary and Conclusions
Generally, there is some stability in customers' attitudes over a 1.5-year time period. Customer sentiment appears to be more stable than customer satisfaction and likelihood to recommend. In the current study, customer sentiment toward a vendor accounted for 36% of the variance (.60 squared) in customer sentiment toward that vendor a year and a half later; customers who possessed positive sentiment at time 1 tended to possess positive sentiment 1.5 years later. Even so, a degree of stability still leaves considerable change occurring over the 1.5-year period.
Changes in the customer experience appear to have more of an impact on traditional measures of customers' attitudes (i.e., satisfaction, likelihood to recommend) than on customer sentiment. The difference between these metrics' correlations with CX areas, however, might only reflect differences in measurement method; that is, the difference may be a methodological artifact. As I've demonstrated before, the correlation between CX questions and recommendation intentions is likely driven by common method bias: both metrics require customers to provide ratings on a 0-to-10 scale, artificially inflating the correlation between them. I suspect that the true correlation between CX satisfaction and real recommending behavior is a lot lower than what we found in the current study (.53).
The CSI could be a useful alternative to other customer metrics that share the same rating scale as the CX questions. Unlike many current customer loyalty metrics (e.g., satisfaction, likelihood to recommend), the CSI doesn't require a rating scale and, therefore, doesn't suffer from this method bias. I'll be exploring the use of the "one word" question and the CSI in upcoming blog posts to demonstrate their usefulness in helping businesses optimize their customer surveys.
An earlier version of this article first appeared in CustomerThink.