Learn what Customer Effort Score (CES) is, how to measure it, and how to reduce friction in your customer experience. Includes formulas and benchmarks.
TidySupport Team
Published on April 11, 2026
Customer satisfaction surveys tell you whether people are happy. Customer Effort Score tells you something more useful: whether you are making things easy.
This guide explains what Customer Effort Score (CES) is, why it is one of the best predictors of customer loyalty, how to measure it, and practical ways to lower the effort your customers have to put in.
Customer Effort Score (CES) is a metric that measures how much effort a customer has to put in to accomplish something — resolve a support issue, complete a purchase, find information, or use a product feature.
The concept comes from a landmark 2010 study published in the Harvard Business Review by Matthew Dixon, Karen Freeman, and Nick Toman. Their research, which surveyed more than 75,000 customers, reached a surprising conclusion: customer loyalty has far more to do with reducing effort than with delighting customers. In other words, the best thing you can do for retention is not to go above and beyond — it is to make things effortless.
CES is typically measured with a single question:
"How easy was it to [accomplish X]?"
Customers respond on a scale — usually 1 to 5 or 1 to 7, where higher numbers mean easier. Some companies use an agree/disagree format: "The company made it easy for me to handle my issue," with a Likert scale from Strongly Disagree to Strongly Agree.
The beauty of CES is its specificity. Unlike NPS (which asks about overall likelihood to recommend) or CSAT (which asks about general satisfaction), CES is tied to a concrete interaction. That makes it highly actionable — you know exactly which touchpoint caused friction.
The original HBR research found that 94% of customers who reported low-effort experiences said they would repurchase, compared to only 4% of those who reported high effort. Satisfaction is fleeting, but effort leaves a lasting impression. People remember when something was hard.
You might think your support process is smooth, but customers may be bouncing between channels, repeating their information, or struggling to find the right contact method. CES surfaces these hidden friction points.
Because CES is tied to specific interactions, the data tells you exactly where to invest. If your checkout CES is low, you know to simplify the purchase flow. If your support CES is low, you know your resolution process needs work.
CES is not a replacement for other metrics — it fills a gap. NPS tells you about overall brand perception. CSAT tells you about satisfaction with a specific interaction. CES tells you about the effort involved. Together, these three metrics give you a complete picture.
The two most common scales are:
7-point scale (recommended): 1 = Very Difficult, 7 = Very Easy
This gives you more granularity and produces a clearer distribution of responses. It is the scale used in the original research and the one most benchmarking data is based on.
5-point scale: 1 = Very Difficult, 5 = Very Easy
Simpler, with slightly higher response rates, but less nuance in the data.
Keep the question simple and specific to the interaction, for example: "How easy was it to resolve your issue today?"
Avoid vague phrasing like "How was your experience?" — that is CSAT territory.
There are two common ways to report CES:
Average score: Add up all responses and divide by the number of responses. If you use a 7-point scale, your CES will be a number between 1 and 7.
CES = Sum of all scores / Number of responses
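As a quick illustration, here is a minimal sketch in Python that applies this formula; the response values are made up for the example:

```python
# Hypothetical CES responses on a 7-point scale (1 = Very Difficult, 7 = Very Easy)
responses = [7, 6, 5, 7, 3, 6, 4, 7, 5, 6]

# Average CES = sum of all scores / number of responses
average_ces = sum(responses) / len(responses)
print(f"Average CES: {average_ces:.2f}")  # 5.60 for this sample
```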
Percentage of "easy" responses: Count the number of responses at 5 or above (on a 7-point scale) and divide by the total number of responses. This gives you a clean percentage that is easy to communicate internally.
% Easy = (Responses of 5, 6, or 7 / Total responses) x 100
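And the same idea for the percentage view, again with hypothetical responses:

```python
# Same hypothetical 7-point responses as above
responses = [7, 6, 5, 7, 3, 6, 4, 7, 5, 6]

# % Easy = (responses of 5, 6, or 7 / total responses) x 100
easy_count = sum(1 for score in responses if score >= 5)
percent_easy = easy_count / len(responses) * 100
print(f"% Easy: {percent_easy:.0f}%")  # 80% for this sample
```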
Send the CES survey immediately after the interaction you want to measure: right after a support ticket is resolved, after a purchase is completed, or after a customer uses a self-service resource.
CES surveys are short (one question, sometimes with an optional follow-up), so response rates are typically higher than for longer surveys. Expect 20-40% response rates for post-support surveys and 10-20% for post-purchase surveys.
Understanding what makes interactions feel effortful helps you prioritize improvements.
Forcing customers to switch channels — "Please call us instead of emailing" or "You need to submit a form on our website" — is one of the biggest effort drivers. Every channel switch resets the customer's context and patience.
Having to re-explain the problem to a new agent is one of the most common complaints in customer service surveys. It signals that your systems are not connected and that your agents do not have access to conversation history.
Waiting is effort. Even if the eventual resolution is good, a long wait increases perceived effort. The longer your first response time, the worse your CES tends to be.
Complex return policies, multi-step forms, unclear instructions — all of these add effort. The more steps between "I have a problem" and "My problem is solved," the higher the effort.
Being transferred between agents or departments is inherently high-effort. Each transfer requires the customer to re-explain their issue and introduces uncertainty about whether the next person can actually help.
When customers try to solve problems themselves (through your knowledge base or FAQ) and cannot find answers, they end up contacting support already frustrated. The failed self-service attempt adds effort on top of the support interaction.
First contact resolution (FCR) is the single biggest lever for reducing effort. Give your agents the authority and information they need to solve problems without escalation. This means access to account details, billing tools, and decision-making authority for common issues.
If a customer emails you, resolve it over email. Do not force them to call. If they start a chat, finish it in chat. Channel switching should be the exception, never the rule.
When a customer is transferred or follows up on a previous conversation, the new agent should have the full history available immediately. Tools like TidySupport keep the entire conversation thread — including internal notes — in one place so agents never have to ask the customer to repeat themselves.
Audit your most common customer journeys and remove unnecessary steps. Can you reduce a five-step return process to two steps? Can you pre-fill forms with information you already have? Every step you remove reduces effort.
A comprehensive, well-organized knowledge base reduces effort by letting customers solve problems without contacting you at all. But it only works if the articles are easy to find, clearly written, and kept up to date.
If you know about a bug or outage, reach out to affected customers before they contact you. Proactive communication eliminates the effort of the customer having to report the problem and wait for a response.
When a customer gives you a low CES score, follow up personally. Ask what made the experience difficult. This not only gives you actionable feedback — it also shows the customer you care about improving.
Agents should be empowered to see a conversation through to completion rather than handing it off. This might mean giving them access to tools in other departments or training them to handle a broader range of issues.
Several tools help you measure and act on Customer Effort Score.
For the measurement side, any survey tool that supports a scaled question will work. The real work is on the operational side — reducing the friction that drives high-effort scores.
Review individual CES responses daily (especially low scores that need follow-up) and aggregate trends weekly or monthly. Look for patterns by channel, issue type, and agent to identify systemic problems.
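One simple way to spot those patterns is to aggregate average CES by a dimension such as channel. The sketch below assumes hypothetical survey records with channel, issue type, and score fields; in practice these would come from your survey tool's export:

```python
from collections import defaultdict

# Hypothetical survey records (field names are assumptions for this example)
records = [
    {"channel": "email", "issue_type": "billing", "score": 6},
    {"channel": "chat", "issue_type": "billing", "score": 7},
    {"channel": "email", "issue_type": "returns", "score": 3},
    {"channel": "phone", "issue_type": "returns", "score": 2},
    {"channel": "chat", "issue_type": "how-to", "score": 6},
]

# Average CES per channel, to surface systemic friction in a single channel
totals = defaultdict(lambda: [0, 0])  # channel -> [sum of scores, response count]
for record in records:
    totals[record["channel"]][0] += record["score"]
    totals[record["channel"]][1] += 1

for channel, (score_sum, count) in sorted(totals.items()):
    print(f"{channel}: average CES {score_sum / count:.1f} ({count} responses)")
```

The same grouping works for issue type or agent; a channel or issue category that consistently averages well below the others is usually the first place to look.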
Industry benchmarks can be useful as a starting point, but your most valuable comparison is against your own historical data. Focus on improving your CES over time rather than hitting a specific number.
A good CES score alongside a poor CSAT score can happen when the process is easy but the outcome is not what the customer wanted. For example, a customer might find it easy to contact support but be dissatisfied because their refund request was denied. In this case, CES and CSAT are measuring different things, and both are valid.
CES does not replace NPS; the two measure different things. CES is a transactional metric tied to specific interactions, while NPS is a relationship metric that reflects overall brand perception. Use both, along with CSAT, for a complete view.
Aim for at least 100 responses per touchpoint before drawing conclusions. With fewer responses, individual outliers can skew your average significantly. If your volume is low, consider aggregating data over a longer time period.
On a 7-point scale, a CES of 5 or above is generally considered good. On a 5-point scale, aim for 4 or above. The more important number is the percentage of customers who rate their experience as 'easy' (5 or higher on a 7-point scale) — top-performing teams see 80%+ in this range.
Send the survey immediately after a specific interaction — right after a support ticket is resolved, after a purchase is completed, or after a customer uses a self-service resource. CES is most useful as a transactional metric tied to a specific touchpoint, not as a general relationship metric.
CSAT measures how satisfied a customer is with an interaction. CES measures how easy the interaction was. A customer can be satisfied with the outcome but still feel the process was too difficult. CES tends to be a better predictor of future loyalty than CSAT.
Yes. CES works well beyond support: it is a strong metric for knowledge bases, onboarding flows, and checkout processes. If customers struggle to find answers on their own, your CES scores will reflect that and point you toward areas that need improvement.