By Steven Buck

If you want to gauge the likelihood that your employees will bring their best selves to work and do their best work, you might be tempted to turn to the Employee Net Promoter Score (eNPS). 

But doing that might cause more problems than you’re aiming to solve. The eNPS came from a framework originally meant for organizations to survey customers, not employees. Employees, as savvy organizations know, are a much different population from customers—with different needs, purposes, and mindsets.

What is eNPS?

Inspired by the widely-used measure of customer loyalty known as the Net Promoter Score (NPS), the eNPS is calculated from a single question, usually some form of, “How likely are you to recommend working at your organization?”

Employees select an answer from an 11-point scale, 0 to 10. The total score is then computed by subtracting the percentage of “Detractors” (employees who select 0 to 6) from the percentage of “Promoters” (those who choose 9 or 10), while ignoring the so-called “Passives” (7 or 8). The result is a score between -100 and 100, with 100 being the most positive.
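For readers who like to see the arithmetic spelled out, here is a minimal sketch of that calculation in Python (the function name and the sample responses are ours, purely for illustration):

```python
def enps(responses):
    """Compute an eNPS-style score from a list of 0-10 survey answers.

    Promoters answer 9 or 10, Detractors answer 0 to 6, and Passives (7 or 8)
    are ignored. The result falls between -100 and 100.
    """
    if not responses:
        raise ValueError("at least one response is required")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)


# Hypothetical team of 10: 4 Promoters, 3 Passives, 3 Detractors
print(enps([10, 9, 9, 9, 8, 7, 7, 6, 5, 3]))  # 10.0
```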

At first glance, it might seem logical to consider eNPS as a meaningful way to understand employee engagement. Indeed, many organizations use eNPS as a leading indicator of employee retention, performance, and employer brand. However, a closer look at eNPS and the way it is calculated suggests that first impressions may not be as reliable as they initially feel.

Simply put, eNPS is not the best measurement of employee engagement. To understand why, we need to deconstruct the score that served as its model: the NPS. 

eNPS: A case of misguided inspiration

The NPS score was first mentioned in a 2003 Harvard Business Review article written by Bain & Co. consultant Frederick Reichheld. He shared how Enterprise Rent-A-Car had devised a simple and seemingly flawless way to measure customer loyalty. What grabbed Reichheld’s attention was the ability to use a single survey question as a predictor of growth.

Reichheld was justifiably excited by this prospect since traditional customer surveys were long and complicated—and yielded low response rates. Even worse, results from those old surveys did not directly indicate what organizations could do to improve.

In the 16 years since Reichheld first advocated for the NPS, it has become a widely adopted and influential metric. CEOs and boards of directors across industries tout their NPS in earnings calls and IPO announcements and use it to justify—or withhold—executive bonuses. Reichheld himself has voiced disapproval of how the NPS has been co-opted to determine bonuses and signal organizational and stock performance. As he said, he “had no idea people would mess with the score to bend it, to make it serve their selfish objectives.”

With this history in mind, it’s little wonder organizational leaders let their familiarity with and enthusiasm for NPS bubble over into employee engagement.

Questioning eNPS for employee engagement

First, let’s break down the NPS:

  • “Net” represents the difference between positive and negative values in a calculation.
  • “Promoter” is someone who talks positively about your organization to others.
  • “Score” is the calculation that indicates how well your organization is performing.

Now let’s dive into the questionable calculation behind the score. Subtracting one survey metric from another (Detractors from Promoters) increases the margin of error. As a result, you need a large enough sample size to arrive at a meaningful score. In smaller groups, scores can shift drastically based on just one person’s response.

Beyond that, the 11-point scale is reduced to three groups for reporting purposes: Detractors (0 to 6), Passives (7 or 8), and Promoters (9 or 10). In essence, the eNPS is a cut-off score (e.g., “9 or above”), which means movement across a boundary (say, from 8 to 9) gets amplified, while movement within a band (say, from 7 to 8) is lost entirely. As a result, this scale can lead to misleading interpretations of the scores.
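To make both problems concrete, here is a small hypothetical illustration (reusing the eNPS helper sketched above; the team size and responses are made up for the sake of the example):

```python
def enps(responses):
    # Same eNPS helper as in the earlier sketch, repeated so this snippet runs on its own.
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)


team = [9, 9, 8, 8, 8, 7, 7, 7, 7, 7]
print(enps(team))                            # 20.0

# One person slipping a single point across the cut-off (9 -> 8) moves this
# 10-person team's score by 10 points...
print(enps([9, 8, 8, 8, 8, 7, 7, 7, 7, 7]))  # 10.0

# ...while everyone else slipping from 8 to 7 changes nothing at all.
print(enps([9, 9, 7, 7, 7, 7, 7, 7, 7, 7]))  # 20.0
```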

Another major issue with the eNPS calculation is that it can mask important details. An example illustrates one of the more salient points:

  • Team A and Team B each have 10 employees. 
  • On Team A, all 10 employees select a 1 (i.e. just above the lowest possible score).
  • On Team B, all 10 employees select a 6 (still a detractor on the eNPS scale).

The pitfall:

  • Both teams have the exact same eNPS score of -100.
  • However, in reality, Team B, with an average of 6, isn’t in terrible shape, while Team A, with an average of 1, is in serious trouble.

In this scenario, eNPS totally misses the important difference between the two teams. 
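A quick sketch of the same scenario shows the masking in code (again, the helper and the teams are hypothetical, defined here only so the snippet runs on its own):

```python
def enps(responses):
    # Same eNPS helper as above, repeated so this snippet runs standalone.
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)


team_a = [1] * 10  # ten employees, all answering 1
team_b = [6] * 10  # ten employees, all answering 6

print(enps(team_a), enps(team_b))          # -100.0 -100.0  (identical scores)
print(sum(team_a) / 10, sum(team_b) / 10)  # 1.0 6.0        (very different realities)
```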

Measuring customer attitude vs. measuring employee attitude

The NPS system was developed with the customer in mind—not the employee’s relationship with an employer. This is underscored by the breakdown of the 11-point scale, in which respondents selecting 0-6 are considered brand detractors, 7-8 are passives, and 9-10 brand promoters.

In fact, there is no evidence supporting the idea that employees fall into these categories when it comes to the likelihood of recommending their organization as a place to work. What’s more, “Likelihood to Recommend” is not the best single predictor of individual- or team-level employee engagement.

“How likely are you to recommend working for our organization?” is a global question. But engagement is shaped largely by local factors, including an employee’s team and manager. People might recommend their organization because they feel it’s a great place to work overall, yet be unhappy with their own team, their manager, how they are enabled in their job, or their opportunities for career development. All of those factors are highly correlated with personal employee engagement.

What’s more, the eNPS uses a large scale on which the distinctions between points aren’t clear. Two teams could foster similar employee experiences yet end up with very different eNPS results, simply because the eNPS question uses an 11-point scale. If employees can’t meaningfully differentiate among the scale options, their responses may not reflect their actual attitudes, and that will degrade the quality of the results.

Instead, employee attitudes are better captured on a five-point scale, which gives employees room to express both negative and positive attitudes about organizational factors. And while consulting firms have traditionally reported a “percent favorable” metric, average scores are a better measure of engagement. That’s because “percent favorable” can vary dramatically in smaller groups—even when group members express little real difference in sentiment. It also fails to register a change when sentiment shifts from neutral to unfavorable.
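Here is a small hypothetical illustration of why average scores can be more informative than “percent favorable” (the 1-to-5 responses, the teams, and the favorability cut-off of 4-or-above are our own assumptions for the sake of the example):

```python
def pct_favorable(responses):
    """Share of responses that are 'favorable' (4 or 5 on a 1-5 scale), as a percentage."""
    return 100 * sum(1 for r in responses if r >= 4) / len(responses)


def average(responses):
    return sum(responses) / len(responses)


# One person shifting from 4 to 3 moves percent favorable by 10 points,
# while the average barely moves.
team_x = [4, 4, 4, 4, 4, 3, 3, 3, 3, 3]
team_y = [4, 4, 4, 4, 3, 3, 3, 3, 3, 3]
print(pct_favorable(team_x), average(team_x))  # 50.0 3.5
print(pct_favorable(team_y), average(team_y))  # 40.0 3.4

# A neutral-to-unfavorable shift (3 -> 2) is invisible to percent favorable
# but does show up in the average.
team_z = [4, 4, 4, 4, 3, 3, 3, 3, 3, 2]
print(pct_favorable(team_z), average(team_z))  # 40.0 3.3
```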

Ignoring what matters most in employee engagement: action

Perhaps the biggest flaw of the eNPS formula is that it completely ignores employees considered Neutral (or Passive). As we well know, asking for employee feedback isn’t helpful unless managers and teams take action on it. In employee engagement action planning, much of the benefit, or lift, can come from improving the scores of those employees on the fence—i.e. the Neutral/Passive scores… the very population the eNPS metric ignores.

A better option than eNPS for measuring attraction, performance, and retention

According to our research, a single question—“How happy are you working at [Organization]?”—is the best predictor of employee engagement. Organizations want to predict employee engagement accurately because it helps drive business outcomes, including financial performance and customer satisfaction ratings. 

Moreover, this question, known as employee satisfaction or eSat, is related to individual performance ratings and is better at predicting attrition two quarters out than traditional measures that ask employees how long they plan to stay with an organization. Our research shows that employees answering unfavorably to the eSat question are five times more likely to quit in the following two quarters than those answering favorably. Employees with neutral eSat answers are two times more likely to quit. Another Glint study showed that unfavorable eSat respondents are 12 times more likely than favorable respondents to leave in the following 12 months.

The eSat question streamlines the survey process because it can predict over 80% of what an 11-item engagement index can predict. For all practical purposes, this single question is all you need to accurately and consistently gauge employee engagement in your organization. When you couple eSat with a small set of questions that have been shown to influence engagement in organizations, such as sense of purpose, opportunities for learning and growth, or role clarity, you can understand both how engaged your employees are and the drivers that can improve their engagement.  

Interested in learning more about how to effectively measure employee engagement? Check out our latest post here.