Commentary

How Do I Know It’s Working? Measuring Retail Personalization

One of the many advantages of personalized retail offers is that their impact can be measured much more precisely than traditional tactics. Splashing a hot deal on the front page of the flyer is a great way to drive traffic, but it doesn’t lend itself very well to control groups or A/B testing. Yet while measuring the product lift driven by personalized offers is fairly standard, many retailers we talk to do not yet take advantage of the full range of methods for measuring personalized marketing. In this post we review the options available and explore how retailers can use them to understand and grow their business.

How did it affect product sales?

This is the most obvious way to measure personalized offers, and the one that most retailers already have in place. I sent the customer an offer on ketchup. Did he buy more ketchup? A classic control group methodology is the best approach. Comparing purchase rates between people who received the offer and those who didn’t can give you a fairly precise idea of the sales driven by the offer. But of course that is not the full story.
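To make this concrete, here is a minimal sketch of the comparison, assuming a pandas DataFrame of transactions with illustrative column names (customer_id, product_id, units); the function and structure are our own simplification, not a full experimental design:

```python
import pandas as pd

def product_lift(transactions: pd.DataFrame,
                 offer_customers: set,
                 control_customers: set,
                 product_id: str) -> float:
    """Compare average units of one product bought by customers who
    received the offer against a control group that did not."""
    rows = transactions[transactions["product_id"] == product_id]

    def avg_units(customers: set) -> float:
        # Customers who never bought the product count as zero units.
        bought = rows[rows["customer_id"].isin(customers)]["units"].sum()
        return bought / max(len(customers), 1)

    offer_rate = avg_units(offer_customers)
    control_rate = avg_units(control_customers)
    return (offer_rate - control_rate) / control_rate if control_rate else float("nan")
```

The offer and control groups should of course be split randomly before the campaign; the point here is only that the read is a straightforward between-group comparison.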

In fact, focusing only on product lifts misses the true value of personalization. Our studies have shown that as much as 85% of the effect of personalization comes from driving additional trips to the store, while only 15% comes from growing customer basket size.

With this in mind, there are three other ways that we suggest measuring personalized campaigns. Each can give a helpful – even critical – perspective on the effectiveness of your personalization efforts.

How did it affect customer spend?

The second question we want to answer is more complex, but arguably more important. Here the aim is to determine not whether customers bought more of the products they got discounts on, but whether they spent more overall. This accounts for the fact that when customers visit a store, they rarely purchase only the items on which they had personalized offers.

In fact, the primary benefit of personalized offers in retail seems to be not that they help you sell more of the products on offer, but that they help get the customer to the store in the first place. Our studies show that more than 80% of the benefit of personalization programs comes from driving trips, while less than 20% comes from growing the basket.

The methodology here is similar to the control group methodology for measuring product lift, but with an important distinction. In this case, retailers generally set aside a portion of the customer base and provide them with a different experience. That might mean no communication at all, or it might mean a generic set of offers that is not hand-picked for them. Whatever the mechanism, the core idea is the same: compare the purchase behaviour of customers who are receiving personalized offers with that of customers who aren’t.
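A sketch of the spend-level read, under similar illustrative assumptions (a transactions DataFrame covering the campaign window, with customer_id and spend columns), might look like this:

```python
import pandas as pd

def spend_lift(transactions: pd.DataFrame,
               personalized_customers: set,
               holdout_customers: set) -> dict:
    """Compare average total spend per customer between the personalized
    group and the holdout group over the campaign period."""
    spend_per_customer = transactions.groupby("customer_id")["spend"].sum()

    def avg_spend(customers: set) -> float:
        # Customers who did not shop at all during the window count as zero.
        return spend_per_customer.reindex(list(customers)).fillna(0).mean()

    personalized = avg_spend(personalized_customers)
    holdout = avg_spend(holdout_customers)
    lift = (personalized - holdout) / holdout * 100 if holdout else float("nan")
    return {"personalized_avg_spend": personalized,
            "holdout_avg_spend": holdout,
            "lift_pct": lift}
```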

How relevant was it?

This method quantifies how relevant a personalized offer campaign is on average through a simple metric we call “average relevant offers”. It works like this. Let’s say you are running a campaign that sends 10 personalized offers to every customer in your loyalty program. You would then calculate how many offers in each customer’s set of 10 are for items they have actually purchased in the past three months. The average score across all customers gives you a quick read on how on target a campaign is. The image below shows a sample output for the Average Relevant Offers methodology.

Figure: Number of relevant offers per customer

A campaign where customers have recently purchased 5 or 6 of the 10 offered items will be perceived as highly relevant. A campaign where the average is closer to 1 or 2 may not be seen as truly personalized. By tracking this relevance score over time, and comparing it with campaign sales lift and ROI results, you can gain valuable insight into where the sweet spot for personalization lies. (You will rarely see scores as high as 9 or 10 relevant offers out of 10, nor would you want to. The ideal is to lead with offers on previously purchased items to drive store visits, then mix in related items that haven’t been purchased before to grow the basket and win new categories.)
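For the calculation itself, a minimal sketch might look like the following, assuming each customer’s offers are held in a dictionary and purchases in a pandas DataFrame with illustrative customer_id, product_id and date columns:

```python
import pandas as pd

def average_relevant_offers(offers: dict,
                            purchases: pd.DataFrame,
                            lookback_days: int = 90) -> float:
    """offers maps customer_id -> list of offered product_ids.
    Returns the average number of offered products each customer
    actually bought in the lookback window (roughly three months)."""
    cutoff = purchases["date"].max() - pd.Timedelta(days=lookback_days)
    recent = purchases[purchases["date"] >= cutoff]
    bought = recent.groupby("customer_id")["product_id"].apply(set)

    scores = []
    for customer_id, offered in offers.items():
        past_items = bought.get(customer_id, set())
        scores.append(sum(1 for product in offered if product in past_items))
    return sum(scores) / len(scores) if scores else 0.0
```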

To take this methodology to the next level, consider using different replenishment cycles for different categories. A product that is commonly bought weekly – such as milk or bananas – might be considered relevant only if purchased in the last 4 weeks. But for items such as laundry detergent or household cleaners, you might need to look at the last 3-6 months to determine relevance.
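A simple way to encode this is a category-level lookback table; the categories and day counts below are purely illustrative, not recommendations:

```python
# Illustrative replenishment windows per category (days).
CATEGORY_LOOKBACK_DAYS = {
    "milk": 28,
    "bananas": 28,
    "laundry_detergent": 150,
    "household_cleaners": 180,
}
DEFAULT_LOOKBACK_DAYS = 90  # fallback for categories not listed

def is_relevant(days_since_last_purchase: int, category: str) -> bool:
    """An offered product counts as relevant only if the customer last
    bought it within that category's replenishment window."""
    window = CATEGORY_LOOKBACK_DAYS.get(category, DEFAULT_LOOKBACK_DAYS)
    return days_since_last_purchase <= window
```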

How personalized did it feel?

The methods above are about hard numbers. Did we sell more? How much did customers spend? How many offers were for previously purchased items? But there is another, softer side to customer loyalty. This approach seeks to answer the question: how personalized did the offers feel to the customer? Note that there is a difference between an offer that is objectively relevant and one that feels personalized. Bananas might be your most frequently bought item, but an offer on bananas is unlikely to make you feel that someone who knows you well handpicked the offer just for you. After all, everyone buys bananas.

One simple way to create that feeling is to prioritize products that are highly relevant for an individual but are bought by a lower percentage of the total population. Counting how many products in each customer’s set are bought by fewer than, say, 1% of the population gives you a quick gauge of how many offers will be perceived as highly personalized.
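One way this count could be computed, again only as a sketch with illustrative names (a purchases DataFrame with customer_id and product_id, and the same offers dictionary as above):

```python
import pandas as pd

def perceived_personalization_score(offers: dict,
                                    purchases: pd.DataFrame,
                                    rarity_threshold: float = 0.01) -> float:
    """Average number of offers per customer on products bought by fewer
    than rarity_threshold of all shoppers, a rough proxy for how
    'handpicked' the offer set will feel."""
    total_customers = purchases["customer_id"].nunique()
    buyers_per_product = purchases.groupby("product_id")["customer_id"].nunique()
    penetration = buyers_per_product / total_customers
    niche_products = set(penetration[penetration < rarity_threshold].index)

    scores = [sum(1 for product in offered if product in niche_products)
              for offered in offers.values()]
    return sum(scores) / len(scores) if scores else 0.0
```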

This is not an exact science. But there is huge value in having a panel of people, often internal, who get to preview their own personalized offers before a campaign goes live. It is a great way to gather qualitative feedback and catch issues early.