Alex Steer


Facebook video metrics, and why platforms shouldn't mark their own homework

527 words

Originally posted on the Maxus blog

Facebook has revealed that for the last two years it has been overstating video completion rates, due to an error in the way it calculates views.

Because Facebook only counts as a 'view' any video consumption over three seconds, it has been applying the same logic to its video completion rate metric - so the metric tells us not how many people who started watching a video then finished it, as we would expect, but how many got past the first three seconds and then finished. It is estimated that video completion rates have been overstated by 60 to 80% for the last two years.

Facebook are now hurrying to amend the metric, which they are treating as a replacement, but which is in reality a bug fix.

The news is understandably shocking to advertisers and their agencies, many of whom have been investing heavily in video and using these metrics to monitor and justify spend.

But it is also sadly predictable - an inevitable consequence of the lack of auditability in the metrics produced by many media platforms, not just Facebook.

Facebook have not allowed independent tracking of video completion rates on their platform, meaning that the only way to get video completion data is from Facebook itself. They are not unique in this, and we see this 'metric monopoly' behaviour from many of the digital media platforms, usually citing reasons such as user experience or privacy. Rather than allow advertisers to conduct their own measurement, many platforms are now offering to provide advanced analytics to brands who buy with them, including digital attribution and cross-device tracking. The data and the algorithms that power this measurement remain firmly in the media owner's black box.

Today's news makes it clear how unacceptable an arrangement this is. At Maxus we talk about the importance of 'Open Video' - planning video investment across many channels and touchpoints, reflecting people's changing use of media and making the most of the vast and proliferating range of video types that exist today, from long-form how-tos and product demos to seconds-long bitesize experiences in the newsfeed. As video changes, it creates more opportunities for brands, far beyond the thirty-second spot.

But Open Video requires a commitment to open measurement. As advertisers and agencies we have to be able to gather a coherent, consistent picture of what people are seeing and how content is performing. We are investing significant effort in building the right measurement and technology stack to help clients plan, deliver, measure and optimise Open Video strategies, including advanced quality scoring, attribution and modelling that lets us see how exposure in one channel compares to another in terms of quality, completeness and effectiveness.

Media platforms create amazing new possibilities and are important partners to advertisers and agencies in innovation and delivery. But they should not be allowed to mark their own homework. Measurement and attribution should always be independent of media delivery, available to agencies and auditable by clients. Any other arrangement is a compromise - and, as we've seen this week, a risk.

# Alex Steer (24/09/2016)


YouTube vs TV: where should advertisers stand in the 'battle of the boxes'?

1167 words

Tom Dunn and I wrote this on Brand Republic this week. Reposting...

It’s been an extraordinary couple of weeks on planet video. The TV industry body, Thinkbox, and Google’s YouTube have been engaged in a full and frank exchange of views that, both are at pains to point out, is absolutely not a fight. The topic they are definitely-not-arguing about is a fundamental one: where advertisers should spend their video advertising budgets.

The totally-not-trouble began brewing back in October, with a punchy statement from Google’s UK & Ireland Managing Director, Eileen Naughton, making the case that advertisers should shift 24% of their TV budgets into YouTube, especially if they’re targeting 16-34 year olds.

Last week, Thinkbox came back swinging, calling the Google claim ‘ill-founded and irresponsible’. In the intervening months they had been analysing viewing and advertising data, finding that while YouTube made up 10.3% of 16-24 year-olds’ video consumption (vs. TV’s 43.5%), it made up just 1.4% of their video advertising consumption (with TV coming in at a whopping 77.5%).

Within a few days, Google wheeled out their econometric big guns and shot back with an even bigger claim: making the case to advertisers that YouTube offers a 50% better return on investment than television, and that 5-25% of video budgets should be spent on YouTube.

Now, it’s definitely not a scrap, but it seems that marketers and agencies are stuck in the middle and in a Brexit kind of way, need to make up their minds where they stand. And worst of all, the kinds of spats that used to be conducted via general pronouncements about consumer trends and attitudes are now being tooled up with findings from data.

Or, should we say, “findings”. From “data”.

Thinkbox and YouTube have stood out in the industry over the years for their commitment to research and measurement. Yet in the battle of the boxes it seems both have lost focus, and the numbers being used raise more questions than they answer.

As the heads of effectiveness and futures at a media agency, we both spend a lot of our time trying to find the balance between what’s working today and what’s changing tomorrow. This conversation about the impact of video channels matters because of the scale of the change we are already seeing in media consumption, and the greater scale of changes to come. Is the leapfrogging of linear TV by online video channels among the under-25s a temporary behaviour or a deeper generational shift? Will the box in the living room lose its next generation of viewers permanently, or will it welcome them back with open arms as a large generation, now house-sharing (or overstaying their welcome with their parents), finds itself with living rooms (and remote controls) of its own?

Either way, the world in which video advertising lives is changing. This stuff matters to all of us who use video to tell stories, make connections and grow our brands. That’s why it’s good to see media owners and industry bodies taking it seriously – but also why the use of data as weaponry has left something to be desired.

In the blue corner, Thinkbox. We’re puzzled by their argument more than by their numbers. They seem to be saying that because more advertising is consumed on TV, clients should advertise on TV more. Yet this comes across as circular logic – saying we should put our ads on TV because that’s where the ads are. If there is a 4:1 ratio of content consumption between TV and YouTube, but a 98:1 ratio of advertising consumption, surely that implies that YouTube has a lot more headroom? It’s fair to say that as consumers we still accept a far higher payload of advertising per piece of content on TV than we do on YouTube, but that’s as much to do with the vastly different buying models, available formats and modes of consumption as with the platforms’ ability to deliver exposure.

In the red corner, YouTube, with its headline-grabbing claim of 50% higher ROI. The rationale for this is a study done with Data2Decisions, an econometrics and analytics consultancy. That is a good sign that there will be some robust measurement underpinning the claim, but more transparency is needed before it can be taken seriously.

The analysis uses a combination of market mix modelling (econometrics) to show the total contribution of TV vs. online video, and ecosystem modelling to dig down into the performance of different individual video channels. This is interesting stuff, and makes for good headlines, but it raises a lot of questions. We think there are three reasons to be cautious.

First, we don’t know what the period of research was, or how many brands, campaigns and categories were included. We don’t know what kind of campaigns they were – brand-building vs. short-term sales-driving, for example. As with a clinical trial, we need to be confident that if we take the same budgetary medicine, we know what the side effects might be.

Second, we’ve only seen the headline figures (mainly about ROI). Headlines alone are a misguided basis on which to start shifting huge chunks of budget around.

For example, if we spend £1 million on TV and drive £1.2 million in sales, we have an ROI of £1.20. If we spend £10,000 on YouTube and drive £18,000 of sales, we have an ROI of £1.80. This is 50% higher than TV, but is also delivering far less money. The research headlines don’t tell you what would happen to the ROI if you put more money into YouTube. Would it stay at 50% better than TV or would it start to diminish?
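The arithmetic above can be sketched in a few lines (a hypothetical illustration using the example figures from this post, not real campaign data):

```python
# ROI here means sales generated per pound spent. The figures are the
# hypothetical ones from the example above, not real campaign results.
def roi(spend, sales):
    return sales / spend

tv_roi = roi(1_000_000, 1_200_000)   # £1.20 back per £1 on TV
yt_roi = roi(10_000, 18_000)         # £1.80 back per £1 on YouTube

print(f"YouTube ROI is {yt_roi / tv_roi - 1:.0%} higher than TV's")
# ...but the absolute incremental sales tell a different story:
print(f"TV incremental sales:      £{1_200_000 - 1_000_000:,}")
print(f"YouTube incremental sales: £{18_000 - 10_000:,}")
```

The point, as above: a higher ratio at a small spend says nothing about whether that ratio would hold as budget scaled up.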

Third, the headlines are only comparing TV and YouTube. To do this properly, we need to understand the relative impact of other video channels too. YouTube’s ROI might be higher than TV’s, but how does it compare to the rest of the online pack?

We welcome the industry taking cross-platform video measurement seriously. At Maxus we have an ‘Open Video’ philosophy for setting video investment strategy, and we are developing tools and technology to plan, measure and optimise across different video channels efficiently and effectively. We use market mix modelling and attribution to identify the impact of different video channels, and advanced tracking to make sure that we have a common approach to measuring things like viewability, brand safety and inventory quality across video channels.

That’s why we’re asking both YouTube and Thinkbox to put down their sharpened spreadsheets and to back up the headlines with evidence. It’s not a matter of suddenly shifting money from TV into YouTube, but of understanding what the right channel mix is for individual brands based on their needs, their priorities and their audiences.

Entertaining as the ringside seat has been, advertisers deserve a bit better. It’s time for a grown-up conversation about what’s working now, and what’s changing next.

# Alex Steer (27/04/2016)


Saying no to marketing tech's Project Fear

829 words

I wrote this for the Maxus blog - reposting here...

I got an email this morning whose subject line read: 'If you're just keeping up to date in marketing tech... You're not doing enough.'

I get similar emails every day, and so do our clients. They reflect the growing tendency of marketing technology companies to sound like people who are trying to sell you gym membership. Except that rather than muscle-bound personal trainers shouting about rock-hard abs, this assault on marketers' sanity and dignity comes via whitepapers, webinars and other content marketing channels.

In some ways this is nothing new. 'Fear, Uncertainty and Doubt' has been part of the IT salesperson's kit for a generation – and is still, famously, associated with technology giants like Microsoft and IBM as they slugged it out for dominance of the enterprise computing sector in the 1990s. But whereas old-school FUD was all about knocking the competition, the new school is all about knocking the client.

Those of us who work in digital, technology and analytics are subjected to a sustained Project Fear campaign from many technology providers. (Before you write in and complain, there are notable exceptions, of course – but sadly they're notable because they're exceptions.) It's as if, now that marketers are huge spenders in data and tech, many vendors are determined to keep them feeling confused and vulnerable. Despite all the evidence to the contrary, the industry is behaving as if it's a seller's market. The kind of advertising hard-sell that went out of favour in the mid 1960s seems to be alive and well here.

If we as marketers still talked to our consumers the way many tech and data companies talk to us, those consumers would long since have abandoned our brands.

The narrative of Project Fear is consistent: every client who has bought our product has transformed their relationship with customers in ways you haven't thought of yet. You're being left behind. Your customers will abandon you and tough guys will kick sand in your face. Without this gym membership – sorry, enterprise software license – you'll be laughed out of the bar by your peers.

This message is broadcast through social media and the trade press every day. It continues to have power because there are so many topics it can cover. If as a marketer you feel like you've mastered web analytics or ad serving, there's always digital attribution, cross-device tracking or containerisation (don't ask) waiting in the wings. And just behind them are the looming bogeymen of machine learning and the internet of things...

To understand Project Fear – to get a handle on how some marketing tech firms feel so able to harass their customers in this way – you need to follow the money. Despite appearances, this is not a seller's market. There is colossal over-supply in marketing tech and the reasons are structural and come down to one point:

You, the marketer, are not the customer.

Now, again, there are exceptions. Large public businesses like Google, Oracle or Adobe depend for their success on satisfied marketers (in part, at least). But for every one of them there are a thousand marketing tech startups who depend on venture capital funding. VC money works in an entirely different way from marketing revenue. It comes in huge, infrequent waves rather than a steady trickle. It is given, or not, depending on funders' perceptions that a business has fairly rapid growth potential. When your business model is to attract the next big round of VC funding, you need lots of marketers to come on board fast. Marketing spend in this case isn't the big fish – it's bait.

When we understand that, Project Fear makes sense – and the need for change becomes apparent.

As marketers and as the agencies that work with them, we need to start demanding customer service and customer satisfaction. The best technology companies, whose incentives are aligned with our own, will support us in this because they profit when we profit. The rest need to understand that we will not maintain the pattern of scattered, reactive hoarding of technology and data assets that has characterised the last half-decade of marketing analytics and tech.

As an agency we work with clients to help them define their marketing technology, data and measurement strategies. In almost every case we find that there are more tools, more capability and more smart thinking already in place than the business realises. Very often, it's not a case of buying a shiny and intimidating new capability, but of making existing ones work harder and work together. Most digital business transformation happens with software, not because of software.

Saying no to Project Fear means saying yes to a more considered, design-led approach to crafting your technology, effectiveness and data ecosystem. It means embracing the subtler arts of data planning and technology plumbing. Above all it means acknowledging that change comes through teams and partnerships, not bells and whistles.

# Alex Steer (01/04/2016)


The dangers of data dependency

115 words

AdAge contains an article about a 10,000-person advertising research study with one of the least surprising findings imaginable:

The study, which the companies said involved 189 different ad scenarios, found that "viewability is highly related to ad effectiveness".

No, you did not misread that. It took a study of 10,000 people to establish that ads are more effective when you see them.

And in fact, this wasn't really an effectiveness study in any meaningful sense - it was an ad recall study.

So in short, the finding is: You're more likely to remember ads that you've seen than ads that you haven't.

There is such a thing as being too data-driven.

# Alex Steer (12/02/2016)


Buzz and effectiveness

101 words

It's Superbowl day today, so if you work in advertising, expect your social feeds to be full of analysis of which brands 'won' based on online buzz around their ads.

All this is good and interesting, and gets what we do in the spotlight. But don't mistake it for effectiveness.

TV brand advertising works hard - but over weeks, months and years, not minutes. Being famous for fifteen minutes is a good start, but just that - a start, not the endgame.

Social buzz is to effectiveness what journalism is, famously, to history - lively, interesting, but just the first draft.

# Alex Steer (07/02/2016)


Ad-blocking comes from a measurement problem

479 words

The release of iOS 9, which enables ad-blocking apps on iPhones, has caused no end of controversy.

On the one hand, advertising is the sponsor of lots of things on the internet that are free and wouldn't be otherwise. On the other, people find online ads sufficiently annoying that they want to block them - to an extent that far exceeds ad avoidance in any other medium.

And annoyingly, both sides are right, which suggests something is broken in the online advertising market.

In fact, it's very clear what this is. Online advertising still suffers from an enormous measurement problem that has led to the proliferation of bad ads.

A vast amount of online advertising is still measured on a 'last-click' basis. Ads are deemed effective only if they are the last thing that drags someone over the threshold to your website, app or online store.

This is, obviously, a horribly flawed way of thinking about how advertising works. To take an offline analogy, this is like saying that if someone sees a big TV ad for a new brand of baked beans; then a great series of press ads; then sponsorship at their favourite sports game; then a PR story about how the beans are sustainably farmed; then goes to the supermarket where there are shelf wobblers pointing him to the brand... then the shelf wobblers should take all the credit if he buys a tin.

This is a problem that has been solved many times over - by marketing mix modelling, and more recently by more detailed digital attribution methods that can see entire customer journeys to purchase, and calculate how important each advertising exposure along that journey was to the final outcome. We've run dozens of mix modelling and attribution studies for clients, and in almost every case, we've found two things:

  1. Clicks barely matter. Seeing ads is what makes people more likely to purchase.
  2. All the advertising people see matters - not just what they see last.
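The contrast between last-click and multi-touch thinking can be made concrete with a minimal sketch (the journey and the linear credit rule are invented for illustration; real attribution models weight touchpoints far more subtly):

```python
# A hypothetical purchase journey, echoing the baked-beans example above.
journey = ["TV", "press", "sponsorship", "PR", "shelf_wobbler"]

def last_click(journey):
    # All credit goes to the final touchpoint before purchase.
    return {touch: (1.0 if i == len(journey) - 1 else 0.0)
            for i, touch in enumerate(journey)}

def linear(journey):
    # The crudest multi-touch model: equal credit to every exposure.
    share = 1.0 / len(journey)
    return {touch: share for touch in journey}

print(last_click(journey))  # the shelf wobbler takes 100% of the credit
print(linear(journey))      # each touchpoint gets an equal 20% share
```

Even this crude linear model avoids the absurdity of the shelf wobbler taking all the credit; proper mix modelling and attribution go further and estimate each touchpoint's weight from data.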

This is not surprising. Yet we're still buying adverts based on a cost-per-click basis, and attributing sales based on clickthrough, because it's easier to keep doing that than to change how we measure and report. Since clicking is an unnatural behaviour, we flood the web with ads in order to get a few clicks, and we reward shrill, intrusive, noisy advertising that leads to clicking, a behaviour that (with the exception of paid search) has almost nothing to do with how advertising works.

No wonder people want to switch off the advertising hose. By measuring properly, and understanding which exposures to advertising are effective and worth paying for, we might avoid crashing our own market.

# Alex Steer (19/09/2015)


The M&C Saatchi advertising equation

233 words

Good to see M&C Saatchi's mad PR equation is back in the adland headlines:

After long hours looking at data from Nielsen and Unilever, the Saatchi Institute was able to map the correlation between the ability of a brand to maximise differentiation and minimise deviation. The equation Saatchi proclaimed as "the answer" back in June is the formula for the curve created when the Unilever data was plotted on a graph.

Well, that's more than we got a few months back when it was first shown (with no explanation). It's unnecessarily obscure for a curve equation, though. It looks like a power law equation to me. On the plus side, it's doing a great job of winding people up, a classic Saatchi move.

If I had to guess, I'd say it maybe describes the factors that condition the extent of a brand's ability to steal market share (which normally operates on a power law basis), presumably by balancing differentiation with the minimisation of loss of sales due to short-term factors, like competitor price-cutting. If so, that's a perfectly good basis on which to think about your advertising.

As and when some detail about it actually gets published, I'll be all over it and looking to test it on data from other brands.

# Alex Steer (26/08/2015)


Lift Points: A currency for effective impressions

504 words

This is a quick follow-up to an equally quick Twitter conversation with Faris Yakob about his interesting piece in the Guardian on the currency of online impressions. The piece's main argument is that the assumption that the impression is the currency of attention is faulty:

In order to buy and sell something, we needed a currency. We settled on the impression: one person being exposed to something once. Attention is a complex and analogue aspect of consciousness – its most directed form – which makes it a small part of the most complex system in the known universe. The complex, fundamentally analogue, nature of attention, which has many different facets, is converted into the simple, inherently binary, impression.

The piece is both mostly fair and a bit unfair. There are better ways of measuring attention; they are granular and tied to individual ad exposures; but they're not yet a properly tradable currency for online media.

So what are they? And what should the currency for attention be?

They don't really have a name yet, but they do exist, we're working with them, and my shorthand for them would be Lift Points.

Here's how it works. Using log-level ad-server or site analytics data (the same thing that gives us impressions), it's possible to identify the number, order and nature of exposures an individual has had to online advertising during a time period. This is particularly true if you can deduplicate across devices, tie cookies/device IDs back to real people, and so on. So far, so obvious.

Using sufficiently large behavioural tracking + attitudinal research panels (e.g. Millward Brown's Ignite network), it's possible to tie these granular impressions to well-controlled brand tracking surveys.

Briefly, this means you can effectively regress the test-vs-control uplift in brand awareness/equity/whatever to specific patterns of exposure - creative, site, placement, order, recency, frequency, and so on. By treating this like an attribution model you can assign percentage points of brand uplift to specific factors in the advertising mix. This can be done at a very large scale, and very quickly - and you can use it to isolate the contribution of any factor and give its typical contribution to uplift. And those are Lift Points.

The most obvious - and most easily tradable - would be Awareness Lift Points: the average incremental points of brand awareness delivered by an ad / placement / etc per single exposure. Because unseen ads have no impact on awareness, this controls for viewability automatically, like any good attribution model.
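As a purely hypothetical illustration (every number below is invented), an Awareness Lift Points figure per exposure could be derived from a matched test-vs-control survey like this:

```python
# Invented survey numbers, for illustration of the arithmetic only.
exposed_aware = 0.46   # brand awareness among exposed respondents
control_aware = 0.40   # awareness among the matched control group
avg_frequency = 3.0    # average exposures per exposed respondent

uplift_points = (exposed_aware - control_aware) * 100  # ~6 points of lift
lift_per_exposure = uplift_points / avg_frequency      # ~2 Lift Points/exposure

print(round(lift_per_exposure, 2))
```

In practice, as described above, the lift would be regressed against creative, site, placement, order, recency and frequency rather than averaged over a single frequency figure; this sketch only shows the unit being traded.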

Is it immediately tradable the way impressions are? No, but if used it would quickly build up a tradable market value the way that media owner ratecards or viewability scores do - based on the typical delivery of uplift per exposure. It's also challenging to the economics of the research industry as it means a vast number of very small and fast-turnaround post-exposure test-and-control surveys, but some providers are already moving in this direction.

# Alex Steer (11/08/2015)


From engines to engineering

220 words

Sometimes it's good to be reminded of what really good brand planning does: takes the latent potential of a brand and makes it into an asset, by connecting something obvious about the brand to something important in life.

Lexus have made fairly bland ads for years. They always tried to be about emotion but got clogged with distracting functional claims about the fuel pipelines, the energy efficiency, the power/weight ratio, or whatever. They ended up being forgettable ads about engines.

Their new work seems to have owned up to the fact that, as a business, they clearly get off on technical ingenuity rather than poetry. It's enabled a slight but powerful shift - from ads about engines, to ads about engineering.

It's a feat of subtle brand planning that's given them something that they want to talk about, that is worth listening to.

Rather than another ad about fuel injection, they've built a working hoverboard, and used that as the focus of a film about trying, failing, learning and succeeding. The internet is, rightly, passing it round like crazy, and it's really worth watching. (The craft of the film is also great.)

Well done to everyone involved. I hope it sells you some cars.

# Alex Steer (05/08/2015)


Designing for 'maybe'

508 words

Reading this piece by Simon Law, on the creative challenge of programmatic and adaptive media, this thought stuck out:

We’re all trained to find the best answer – both at agencies and in marketing departments. But the best answer is inherently singular. It doesn't include a set of 'maybes' – so we need to change our attitudes, too.

I worked with and for Simon for three and a half years at Fabric, and it's a principle we put in practice as an agency, testing and measuring creative ideas and scaling up the ones that worked.

But 'designing for "maybe"' strikes me now as a good foundational principle of building analytics functions, as well as creative departments. There are only two components of an analytics team: the people, and the technology. We need to design for 'maybe' when assembling both.

People first. Everybody recognises the need for expertise - you can't really bluff your way in advanced statistics or data integration. But we should be hiring people who approach their jobs as collaborators and inventors, not just as experts. Being an expert is a defensive posture (I'm an expert only insofar as you're not, and I'm an expert in a particular thing...); being a collaborator and an inventor makes you the kind of person who can be approached with a new problem and look for a way to say 'maybe we can...'.

And then technology. More and more of the challenges we face in analytics are problems of technology - its capability and its scale, but also its ability to help us respond to uncertainty. There are a lot of good, powerful marketing technology software products around these days, with serious amounts of data behind them; but most of them exist to make doing certain things more intuitive, user-friendly, foolproof. They offer a set of definite 'yeses', lots of 'nos', but very few 'maybes'. We need technology and software that we can tinker with, recombine and plumb together in new ways to answer original questions - Lego bricks, not works of art.

At Maxus our analytics technology stack is designed for maximum flexibility - scalable on-demand big data warehousing, a lot of SQL, a lot of R and a bit of Python for analytical programming, and build-your-own visualisation layers using Tableau, Shiny and PowerBI among others. It's not always pretty, but it lets us say 'maybe' a lot more, and 'no' a lot less, when we're asked to help solve a problem we haven't encountered before, and get solutions working in days and weeks rather than months. And, of course, we put them in the hands of people who see 'maybe' as a challenge.

If you're buying analytics products or services, look beyond the elegant user interface. Most analytics tools, behind the scenes, involve a lot of people prodding scripts. Ask how open they make the underlying data; ask how locked the development roadmap is; and ask whether they will let you answer 'maybe' to an interesting question you haven't thought of yet.

# Alex Steer (05/08/2015)

