Make love, not NPS

Great product teams know why people love or hate their products, not whether people would recommend them to friends.

NPS – net promoter score – is a metric that many companies use to assess product satisfaction. I’ll assume you know how it works, but if not, see the footnote at the end for a quick primer.

Give me a few minutes of your time to prove to you why NPS sucks and how you can measure love instead.

Why NPS sucks

It’s trademarked
NPS isn’t some commonly accepted measure like temperature or length. It’s a number invented by an agency, and it’s been trademarked. If I were an agency looking to drum up work, I’d invent a loaded metric that always shows your product needs improvement, lock it up with intellectual property laws to prevent others from using it, then sell you a solution that optimizes just that metric. Let’s dig into how loaded it is.

Ignoring 28% of the results
I swiped a ton of NPS data from indexnps.com – a site that has NPS results for a wide swath of brands. Recall that responses of 7 and 8 are ignored in NPS results, yet 28% of NPS responses fall in that range. If I were an agency trying to land a client, throwing out more than a quarter of the results without any justification would be a great way to stack the deck toward “you need my help.”

Biased results
People often answer survey questions based on what they think you want to hear rather than giving an honest response (“experimenter effect”). In the case of NPS, they’ll answer that your product is great even if it isn’t. A full 67% of the indexnps scores were 9-10. Were those people really excited to recommend those products, or were they trying to satisfy the pollster?

What are they recommending?
The people answering your NPS question could be recommending any part of your product. For example, if your website has an engaging user experience and separately a monetary incentive for participation, people could be recommending your monetary incentives (“Hey, get free money here”) over your user experience (“Hey, this site is really engaging”). This is an essential detail that gets paved over with the overly simple NPS question.

Hypothetical vs. concrete
Since NPS is framed hypothetically (“Would you recommend…?”), you’re not learning whether people actually recommended it (“Did you recommend…?”), to whom they recommended it, how they recommended it, or whether that recommendation actually had an effect. NPS leads to more questions than answers.

It’s not an outcome
Companies treat NPS as a key outcome — ex. “Our goal this quarter is to increase NPS 20%” — without understanding how, or if, that translates to other business results. NPS isn’t an outcome: a high score doesn’t ensure more business, and it doesn’t tell you whether customers will stay loyal to your product.

Measuring Love

My hypothesis is that love and hate are the best indicators of your product’s future success, so you need to measure them.

Humans make decisions based on emotions (then justify their emotional decisions with reason). Your goal is to make a product that triggers positive emotional outcomes so that people become loyal to your product without realizing it. Negative emotional responses and decreasing positive emotional outcomes are signs that you’re going to lose customers.

Ask these questions instead to learn whether or not people love your product.

“How do you feel about the product?”
Have the person choose from a series of smileys with a range of emotions – elated, happy, neutral, angry, crying, sick, love, etc. Keep track of the positive versus negative emotions and how they change over time.

“What do you love most about the product?”
Sure, it’s a naive question, but it bluntly gets at the issue. These are the items you should feature in your marketing materials and sales pitches to get future customers’ attention. Ask it open-ended (vs. multiple choice) for best results.

“What do you hate the most?”
These are the problems that you need to fix or you’ll lose customers. Note that different types of users will have different problems, so you need to ask a wide swath of users. For example, if you’re making a business app, the everyday user will hate some parts (ex. “this page sucks”) while the buyer will hate others (ex. “this costs too much”).

“Have you recommended…”
Ask this yes/no question – “Have you recommended [my product] to a [friend/colleague/etc] in [the last month/year/ever]?” Try to learn about the context of the recommendation and why that person recommended it. Concrete questions are way better than hypothetical ones (“Would you recommend…”).

“How did you recommend…”
Ask how someone explained your product to another potential user of your product. Your everyday users can explain your product to other potential users better than you can. You need to understand the language of your users, then use those words when talking to other prospective customers.

Measuring love is hard and necessary

The promise of NPS is that it’s a quick way to assess how well your product is doing — that recommendations are a proxy for product success. Taking shortcuts with something as complex as social interactions or emotional reactions can lead to worse results than doing nothing at all.

This gets to the core of the matter — a great product team should always seek to understand why people love and hate their products, and they should be able to tell you why without hesitation. “Hey PM, what do people love the most about the product?” If your product team can’t answer that question immediately with concrete examples, they’re not doing their job.

So do the right thing with your NPS scores — get rid of them — and instead learn how people feel about your product. I’d take one person who loves my product over a hundred who’d recommend it to a friend.

tl;dr on NPS: NPS is usually asked as a question in a survey like: “From 0 (not at all) to 10 (definitely), how likely are you to recommend [my product] to a [friend/colleague/etc]?” The NPS score itself is (% of people who answered 9 and 10 – % of people who answered 0 to 6).

So if your NPS is 70% (that is, there’s a 70-point difference between the share of people who answered 9-10 and the share who answered 0-6), the general interpretation is that people like your product. A score of -20% (that is, the share of people who answered 0-6 was 20 points higher than the share who answered 9-10) probably means that people don’t like your product.
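The arithmetic in the tl;dr is simple enough to sketch in a few lines. Here’s a minimal illustration (the function name and sample scores are my own, not from any NPS tooling) — note how the passives fall out of the calculation:

```python
def nps(responses):
    """NPS from a list of 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / n

# Passives (7 and 8) count in the denominator but are otherwise
# ignored -- the ~28% of responses the article says get thrown out.
scores = [10, 9, 9, 8, 7, 7, 6, 3, 10, 9]
print(nps(scores))  # 5 promoters, 2 detractors, 3 passives -> 30.0
```

Two survey runs with very different passive counts can land on the same score, which is one concrete way the metric hides information.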