It’s time to rethink digital measurement for grocery brands
With the demise of the third-party cookie looming ever larger, digital ad measurement is starting to garner serious attention from marketers.
This could be a good or a bad thing. It’s a good thing if marketers take the opportunity to really explore what they are measuring, why they are measuring, and what it actually means. It’s a bad thing if marketers look for new ways of doing the same measurement.
At face value, ad measurement is easier and more precise than ever. While clicks remain prevalent, a gaggle of enticing new engagement metrics has come to the fore in recent years as ad and consumer technology have evolved at pace. We can now measure engagement, social interactions, and even store footfall. We have created a heady mix of key performance indicators (KPIs) that are used to justify the effectiveness of online marketing campaigns.
Add to this increasing sophistication in modelling and attribution, plus the ever-expanding C-suite of measurement lenses – CTR, CPA, CPL, CPC, CPM, etc. – and it seems that fast-moving consumer goods (FMCG) marketers have a million and one ways to demonstrate campaign value. Even better, they have the levers to improve campaign value. Granular targeting, machine learning, AI – it’s all available at the touch of a button.
The numbers gap in measurement
With a gamut of quantifiable metrics and increasingly slick technology, we ought to be in a golden age of relevance and effectiveness in marketing. As consumers, though, many of us can attest that the majority of ads we see online are – politely – sub-par.
Recent headlines tell an even more eye-opening story about the state of digital advertising. Let’s take Uber who, as a tech company, you would hope were pretty savvy in their marketing measurement. Uber’s chosen KPI of app installs seems a sensible measure on paper, and their attribution model looked smart: they paid only when the user actually used the app. User sees ad, user installs and uses app, Uber pays the ad network. Simple, but ineffective.
First, Uber realised – eventually, and only after external complaints – that they had been defrauded by ad partners. Then, possibly worse, they realised that $100 million of their ad spend – two thirds of their total – was driving precisely nothing and was instead taking credit for actions that would have happened anyway. Uber later managed to lop off a further $20 million in ad spend, taking their grand total of money down the drain to $120 million a year that their ad metrics presumably claimed was money well spent.
How could this happen? How could a company like Uber – with an advanced marketing programme, all the modern metrics, and access to the full range of marketing platforms, targeting and best practice – be throwing away $100 million plus? And Uber isn’t an isolated case; how could the world’s second largest advertiser, P&G, also have discovered it was wasting $100 million online annually?
The answer is simple and worrying – for many advertisers, digital measurement does not work. When even unicorns can’t magic up effective digital marketing, the industry needs to pay serious attention.
The customer impact of bad measurement
Wasted spend and fraud are headline grabbers, but marketers also need to think about the potential customer impact of bad measurement. If the measures are wrong, the actions that marketers take are also likely to be wrong. The power of precision targeting could be used to target precisely the wrong people. The learning algorithm which is trained to maximise clicks learns that the best tactic is to show the ad more times. Even worse, bad measures could directly contribute to bad customer experience.
The interruptive ad that pops up right in the middle of a YouTube video. The retargeting ad that is smart enough to know which shoes I looked at, but too dumb to stop showing them to me after I bought them. The wilfully vague headline that forces the user to click to find the whole, usually meaningless, piece of news. These small but persistent annoyances rely on rewarding tactics that drive clicks, engagement and the mass of measures that don’t measure anything.
Click and engagement-based metrics are particularly weak for FMCG brands because they don’t link to anything tangible. For FMCGs, the distance between a click or a like of an advert and a purchase – which is overwhelmingly likely to happen in store – is simply too great. Unsurprisingly, when dunnhumby measured the relationship between click-through rate and incremental on- and offline sales (true Return on Ad Spend), we found none whatsoever.
Better are metrics linked to verified outcomes – for FMCGs, that’s a confirmed product purchase after seeing the ad. But even when metrics are linked to concrete outcomes such as conversions, for FMCGs this typically covers online sales only. In mature markets, and post-Covid, that might mean an FMCG marketer has sight of 15-20% of sales; in most markets they’ll be looking at under 5%. dunnhumby partners with FMCGs to show offline as well as online conversions via granular retailer data – a significant step forward.
That said, Uber’s harsh lesson shows that outcome-based metrics alone don’t guarantee good measurement.
From absolute failure to relative success
The gold standard in measurement is incrementality. Testing an activity against a verified, representative control group is the best way to prove or disprove marketing effectiveness. Incrementality tells marketers how many of their conversions or results would have happened without the activity. At dunnhumby we have always used incrementality wherever possible, and tools like Facebook’s conversion lift tests increasingly open this option to marketers across digital channels.
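The arithmetic behind incrementality is straightforward. As a minimal sketch – all group sizes and conversion counts below are hypothetical, illustrative numbers, not dunnhumby or Facebook figures:

```python
def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Estimate how many conversions the activity actually drove.

    The control group's conversion rate tells us what would have
    happened without the activity; anything above that rate in the
    exposed (test) group is incremental.
    """
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    incremental_conversions = (test_rate - control_rate) * test_size
    # Share of the test group's conversions that would have happened anyway
    baseline_share = (control_rate * test_size) / test_conversions
    return incremental_conversions, baseline_share

# Hypothetical campaign: 3,000 of 100,000 exposed shoppers convert,
# versus 2,400 of 100,000 in a matched, unexposed control group.
inc, baseline = incremental_lift(3_000, 100_000, 2_400, 100_000)
# Only 600 conversions are incremental; 80% would have happened anyway.
```

A last-click model would have credited the campaign with all 3,000 conversions – exactly the trap Uber fell into.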
While incrementality is something to work towards, the reality for FMCG marketers, especially in the brand space, is that incremental testing of all activities isn’t feasible. Most FMCG marketers don’t have the luxury of vast sources of first-party customer data and a team of experts to set up perfect test and control campaigns for every conceivable channel. So what can FMCG marketers do?
Uber and P&G uncovered their measurement issues by asking a very simple question: what if? What if we switched off this ad network? What if we switched off everything? What happens to my measurable outcome? Crucially, they stopped focusing on the absolute numbers and misleading ROAS of each campaign and started focusing on the relative impact of different scenarios.
They accepted that the metrics alone weren’t a reliable judge of performance, but that by varying the approach and – crucially – linking it to a verified outcome, they could build up a picture of what worked and what did not.
FMCG marketers can and should bring this mentality into their measurement planning. Every campaign should have a ‘what if’ question to be tested. What if half the audience sees a brand creative and the other half sees a trade creative? What if I upweight my frequency? What if I change my channel mix? Even the drastic questions are useful – what if I stopped advertising on this channel?
For targeted channels, setting up such A/B tests is straightforward. For national or mass campaigns, a geographical split is a feasible option, perhaps with market share as the outcome. All else being equal, any significant difference in results will be down to the variant being tested.
By comparing two variations of these questions in one campaign, marketers can start to learn directionally what works and what doesn’t. They won’t uncover absolute ROAS; only an incremental approach or very robust econometric measurement can give a reliable answer there. But with a holistic test-and-learn approach linked to verified outcomes, FMCG marketers can quickly build a bank of good and bad practice which can be applied across the funnel.
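Deciding whether one variant genuinely beat the other comes down to asking if the difference in outcome rates is bigger than noise. A minimal sketch using a standard two-proportion z-test – the split sizes and conversion counts are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates.

    As a rule of thumb, |z| above roughly 1.96 suggests a real
    difference at the usual 95% confidence level; below that,
    treat the variants as indistinguishable and keep testing.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the assumption the variants perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical split: brand creative converts 550 of 50,000 shoppers,
# trade creative converts 460 of 50,000.
z = two_proportion_z(550, 50_000, 460, 50_000)
```

Here z comes out above 1.96, so the brand creative's edge is unlikely to be chance – a directional learning, even without knowing absolute ROAS.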
If FMCG marketers adopt this mindset, if they focus more on working out what works and what doesn’t through testing, marketing effectiveness for FMCGs can be greatly and continually improved. If the focus remains on absolute numbers as measures of performance, we’ll carry on seeing stories of wasted ad spend, no matter how advanced the ad tech.