How to use Twitter’s new ‘impressions’ metrics

We published this update to clients earlier this week. I’m reposting it here in case it’s useful to anyone working with Twitter data. If you have questions about this or want to know more, drop me an email.

Within the last few days, Twitter has released a major update to its analytics platform. One of the new metrics included is ‘impressions’ – a count of how many times a tweet was seen. This information was previously available only for promoted (paid) tweets.

Fabric’s data science team has been analysing the new data for our clients’ accounts. The result is striking: on average, only 16% of followers see each tweet that a brand publishes.

Why this matters

The update means that, for the first time, brands can compare the organic performance of their tweets against their other media – and particularly against Facebook.

Facebook has been widely criticised for the declining ‘organic’ reach of brand posts, which we have been tracking since Q4 2012 and which is now as low as 2% of the total fan base, for large brand pages. Twitter has largely escaped such criticism because there has, until now, been no way of knowing how much free reach a tweet gets. This means it’s been impossible to quantify the media value of a Twitter follower.

This has led to some fairly wild assumptions. A common industry shorthand for ‘Twitter reach’ is the total number of a brand’s followers, plus the total followers of anyone who retweets that brand’s content. This is a hugely optimistic measure of ‘opportunities to see’ that doesn’t stand up to scrutiny: we know that people don’t spend all day glued to Twitter, and they don’t see every tweet from every account they follow.

The average reach of a tweet

For the last two years, Fabric’s data science team has used an algorithm to give a rule-of-thumb estimate of true organic impressions. We’ve suspected that the view count of a tweet is a low percentage of the follower count, just as it is on Facebook.

Over the last few days we have been digging into the new Twitter data for our clients. On average, we’ve found that an un-promoted tweet is only seen by 16% of a brand’s Twitter followers. For ‘reply’ tweets, the figure is typically between 1% and 2%.
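In code terms, the calculation behind that figure is trivial. A sketch (the function name and the numbers are illustrative, not client data):

```python
# Estimate organic reach: the share of a brand's followers who
# saw a given un-promoted tweet.

def organic_reach_rate(impressions: int, follower_count: int) -> float:
    """Impressions on a tweet as a fraction of the follower base."""
    if follower_count <= 0:
        raise ValueError("follower_count must be positive")
    return impressions / follower_count

# Illustrative only: a brand with 500,000 followers whose tweet
# registers 80,000 impressions has reached 16% of its base.
print(f"{organic_reach_rate(80_000, 500_000):.0%}")  # 16%
```

Strictly, impressions count views rather than unique viewers, so a figure like this is an upper bound on unique reach.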

Retweets help improve organic impressions, of course – but not by the vast amounts assumed in many calculations. There is no significant relationship between ‘favourites’ and the impressions a tweet gets.

Two metrics that matter

Marketers should pay attention to the new ‘impressions’ metric. It shows how far a tweet really travels, and will let brands better optimise the times of day and week at which they schedule un-promoted tweets.

The other metric worth watching is the ‘multiplier’ on promoted posts: organic impressions as a percentage uplift on paid impressions. This is the hard currency of social media – the extra reach you achieve as a brand because people chose to follow your account, and retweet your content.
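As a sketch of that calculation (the function name and figures are mine, not Twitter’s):

```python
def organic_multiplier(organic_impressions: int, paid_impressions: int) -> float:
    """Organic impressions expressed as a percentage uplift on paid impressions."""
    if paid_impressions <= 0:
        raise ValueError("paid_impressions must be positive")
    return 100 * organic_impressions / paid_impressions

# Illustrative only: a promoted tweet that bought 200,000 paid impressions
# and earned a further 50,000 organic ones has a 25% multiplier - reach
# you got because people follow and retweet you, not because you paid.
print(f"{organic_multiplier(50_000, 200_000):.0f}%")  # 25%
```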

What planning is for

If you care about the question implied in the subject line, read Martin Weigel’s speech to the APG, but especially this:

In a world characterized by constant change and innovation, planning will be knowledgeable about the fundamental principles of marketing and communications.

It is breathtaking how little planning knows about how businesses actually make money, and how brands grow and are sustained. It is equally depressing how uninterested many planners appear to be in any of this today. Planners who find this stuff too tedious, or beneath them, would probably be better off advising production companies, than advising clients on how to address their business issues.

In contrast, radical planning will take a keen interest in how our clients actually make money – in the business behind our clients’ brands.

There’s so much wisdom in the whole piece, but I believe the root of it all is what’s above. Planners should know how to use marketing communications to help increase an organisation’s revenues, by reinforcing and changing perceptions in ways that reduce the cost of sales or justify higher prices. If they can do that, they will still be valuable to people who manage brands, regardless of the kind of organisation they work in.

Fun with funnels

The more I think about the tendency to overstate the importance of ROI in digital and social media channels, the more I wonder whether marketing technology companies are implicitly trying to reshape their clients in their own image.

If you run a software start-up, you really care about sales funnels. That’s because most of your marketing activity is sales activity. You develop a great product, you get feedback on that product and improve it, you generate qualified leads and you sell to them. With technology in particular, a lot of this activity happens online and can be tracked; new leads can be stored in CRM databases; and there is a defined customer acquisition funnel down which you can see those leads moving.

Do you know what doesn’t work like that? Selling margarine.

Yet if you believed the way a lot of marketing tech firms talk about selling a £1.50 tub of margarine – or beer or jam or deodorant – you’d swear that it was exactly the same kind of problem as selling a £15 million software licence. People who sell retail analytics software obviously want you to believe (and tend to believe themselves) that there’s definitely a sales funnel in your category – you just need the technology to help you see it.

The truth is, there probably is no funnel if you sell margarine. You don’t move from ‘aware’ to ‘in-market’ to ‘loyalist’ to ‘repeat purchaser’ in any meaningful way. In fact, there’s excellent data demonstrating that in these categories loyalty (the classic bottom-of-the-funnel effect) is largely a function of market share.

So even if people did buy all their margarine online (and they don’t), being able to track every stage in the customer journey wouldn’t necessarily give you much advantage.

When you sell low-price products to the mass market, you grow share by making many small, weak, positive brand impressions in people’s minds – not by ‘closing’ customers and moving them up a linear sales funnel. That logic works for high-price, high-consideration, infrequent purchases like cars, computers or mobile phones. For cheap, fast-moving goods it matters far more that you know which brand and advertising metrics correspond to sales growth, and that you have a way of measuring which advertising content and media is performing best against those metrics.

The ROI error in social media

From time to time, people ask me how to demonstrate the ROI of social media. For the record, this is my answer.

  1. You should calculate the ROI of social media in the same way as you calculate the ROI of other media.
  2. You should calculate the ROI of social media content in the same way as you calculate the ROI of other advertising creative.

ROI is one of those trump cards that you can always play as a marketer or advertiser whenever you want to sound like you’re being businesslike and focused. It gets played a lot when we’re talking about digital or social because those things feel intrinsically less familiar to many advertisers than other types of media (TV, press or outdoor, for example).

As a result it can become a sort of comfort blanket, a way of saying ‘I don’t want to do this’ that doesn’t involve grappling with the difficult issues of how best to use new media channels to a brand’s advantage.

‘Show me the ROI’ is an unfair question to ask, if you’re not following the two guidelines above. Specifically, people tend to ask more of unfamiliar media than they do of familiar ones.

Do you know the financial contribution to your business of your last press ad, or of that sponsorship banner at the cricket pitch? Can you split out the contributions of the media (the placement and format of the advertisement) and the creative (the advertising content itself)? If you know those things, I am impressed, and you are perfectly entitled to ask the same questions of your Facebook posts or that witty Vine video you published.

If you don’t, though, you should hold your social media and its creative content to the same standard of measurement as your other advertising – not lower, but not higher either.

That means, in practice, that where you have developed proxy metrics for one medium, you should also develop them for your others. Suppose, for instance, you have worked out that reach and frequency for your TV advertising are valid indicators of future financial performance for your brand; you should apply the same modelling to your social media. If possible, you should do this properly, with your research agency, developing a valid model that will let you say, in future, that you are confident that metrics X, Y and Z are useful predictors of likely sales growth.

In lieu of that, it makes sense to start with metrics in one channel that you at least know are valid in another, if you think that there are good enough reasons to think that the two channels are broadly alike. If reach and frequency matter in TV, they might also matter in online video, for example; if in press, then perhaps in Facebook posts.

Doing that at least gives you a business case for continuing to invest in channels that you believe are important, while you test and prove whether or not they are. One of the weaknesses of social media providers is their enthusiasm for promoting their own rather obscure metrics, which don’t allow for such easy comparison. One of the things we do at Fabric is to help marketers cut through that definitional clutter, to see which oddly-named social metrics in fact have more old-fashioned ones underpinning them: online equivalents of reach, frequency, impressions, word of mouth, etc., that at least stand a chance of being validated.

Finding business models in big data

In July last year, at the height of the big data nonsense surrounding the doomed Omnicom/Publicis merger, I wrote this about the overused notion that ‘it’s what you do with the data that counts’:

Everybody with a bit of common sense in marketing knows this, and could tell you where and when they want to use data more – when identifying specific challenges and opportunities, when taking the temperature of an issue and figuring out a response in real time, when measuring and adapting the performance of creative content mid-stream during a campaign, and when measuring the relationship between short-term activity and long-term brand value and behavioural change.

I think it’s still true, and I think this is a crunch time for the firms that flooded the marketing industry selling the dream of big data. They’re maturing, they need to get out from under the wing of their startup funders, and that means they need to start developing proper business models that attract customers and generate revenue.

Based on what I wrote above, I think there are four business models where big data will make a serious impact on the marketing industry. They are:

  1. Adding good-quality behavioural data into the econometric models used occasionally in strategic planning.
  2. Supplying metrics which can be used to measure and improve the ongoing performance of advertising content and media.
  3. Supplying metrics as inputs to advertising effectiveness research and brand tracking.
  4. Increasing the speed and breadth of information-gathering during crisis/reputation management situations.

None of these things – marketing strategy, media measurement, advertising research and crisis management – is new. Extra data and new technologies improve them rather than transforming them. Because of that, the data has to be additive: it needs to let marketers see or do what they saw or did previously, but better.

The winners are likely to be organisations who understand what data marketers already have, and how they use it: management consultancies, media agencies, and research agencies in particular. The most successful start-ups will target these relentlessly and concentrate on being useful rather than sounding smart. This is likely to be a bumpy transition for firms who have thrived on VC money rather than customer revenue for so long.

As the market matures, the biggest red herrings will be the things that today sound the whizziest: ad-hoc predictive modelling, sentiment analysis, customer lifecycle modelling and automated adaptive advertising. All of these sound (and are) extremely smart, but they’re a bad fit for what brand marketers want to be able to know and do, and for how consumers behave in most categories. They make for a good sales pitch but a high-risk business model.


Time I got back into this blogging lark, isn’t it?

That’s going to happen both here, and at a Fabric product blog we’ll be starting soon. It’s been a busy few months, during which we’ve launched our product, brought on some major new clients, and started nailing down our positioning – moving out of stealth mode and becoming a serious part of the marketing data landscape.

If you don’t know Fabric or what we do: we help global brands use data to make the most of their digital content. Our main product is a web app that helps marketers see how their digital content is performing. We’ve been quiet for a long time while big data’s been going through its hype phase, but we’re now collecting a billion lines of data a day for 150 brands in 25 markets, so it’s time to make a bit more noise.

In a polite British way, of course.

Revisiting the future of social networks

(Long read: 1,400 words.)

Social networking may be one of those areas where change is the only constant, but over the last few months there have been more reasons than normal to think about how it has changed, and where it is heading.

Facebook, now a 1.2 billion user network, recently had its tenth anniversary and invited users to create ‘lookback’ videos showcasing the time they’d spent on the platform. At around the same time, an earnings call showed declining (though still dominant) usage among teens. Not long after, it acquired WhatsApp, a single-minded messaging app whose founders proclaim, in a sign taped to their desk, ‘No ads! No games! No gimmicks!’ But WhatsApp’s new owner has spent the last two years commercialising its platform in the wake of its IPO, making it near impossible for brands to reach audiences without paying for ads. Facebook and Twitter remain the big-league networks but, with 240 million monthly active users (and its own recent IPO), Twitter has been almost caught up by Instagram (with 200m), and overtaken by the professional network LinkedIn (with 259m+).

So what do we make of all this, except to whistle ‘The times they are a-changin’?

In 2011, working at The Futures Company, I set out to apply a scenario planning approach to the noisy world of social networking, in a report called Status Update. Starting with the idea that user preference would be the long-term critical success factor for any network that monetised its users, we mapped the user choices that seemed most uncertain at the time – those which could go either way.

We identified six. In honour of the term used to describe fundamental changes of direction within tech startups, we called them Pivot Points, and built plausible future scenarios for social networking from the trade-offs they implied. They were:

  1. Scale. ‘Big Net’ futures where users prefer large networks; ‘Tight Knit’ futures where they prefer intimate ones.
  2. Privacy. ‘Open Hand’ futures where users willingly disclose personal data; ‘Closed Fist’ futures where they prefer to remain anonymous and control their data.
  3. Focus. ‘One for All’ futures where a few multi-functional networks win; ‘One for Each’ futures where many functionally specific ones are used.
  4. Time spent. ‘Turn On’ futures where users prefer to be always connected; ‘Tune Out’ futures where usage is occasional.
  5. Utility. ‘Plug’ futures where social networks are utilities; ‘Play’ futures where they are entertainments.
  6. Worldview. ‘Challenge’ futures where users are exposed to many differing perspectives; ‘Confirm’ futures dominated by the reassuring and familiar.

Three years on, how did these uncertainties turn out? Which are resolved, which are still open – and what’s new?

Looking back – Pivot Points in review

The scale question is largely resolved – users still incline towards big, popular networks. The dip in Facebook’s use among teens is a slight counterexample, but it is motivated more by the ‘focus’ and ‘utility’ drivers than by scale or privacy, as the excellent recent Pew research demonstrates. Teens are using Facebook less because maintaining a presence there and checking up on friends is too burdensome compared with the quick, clean interactions enabled by Twitter, Instagram, Snapchat etc. They are less concerned about privacy than they were a few years ago, because they feel better able to manage it. Scale of networks is not a significant factor. In general, the promise of small, intimate networks was not fulfilled – remember Color, Path or Diaspora? (Thought not.)

The privacy debate has moved on significantly. On the one hand, there is high expressed concern around third-party access to personal data held in networks – by spies and insurers more than advertisers. Yet behaviour around the broadcasting of personal information has become more permissive, with the growth of networks that depend more on public broadcast models and less on intimate sharing with friends (e.g. Instagram, Tumblr, Twitter, or YouTube as a subscriber model for video blogging). As noted above, we’ve got better at managing and negotiating online privacy – a trend mapped by the excellent work of Danah Boyd and Alice Marwick. We’re worried less about our bosses seeing last night’s photos, more about spooks putting together metadata profiles about us that we can neither argue with nor control.

The areas of focus and utility have seen some dramatic changes and to some extent converged, as the WhatsApp acquisition exemplifies. Younger users, in particular, have rushed in the direction of networks that do one thing, quickly and well. Twitter and Instagram have also been beneficiaries here. This is driven, though, by a desire for functional efficiency, not for conceptual leanness – and this in turn is likely prompted by the expectations of users who increasingly use mobile devices as their preferred means of access. So Facebook has done well by splitting its user experience into different mobile apps (e.g. Messenger, Paper), while Twitter and WhatsApp have thrived by avoiding distracting bloat – and Twitter has aggressively killed off third-party apps, stripping the user experience back to basics and re-engineering its platform to optimise for speed. Even Facebook on desktop has focused down on the news feed. And LinkedIn has thrived by being all about business networking (stalking in a suit and tie).

The time spent debate is, similarly, largely over. As mobile becomes the dominant way of interacting, so always-on becomes the expectation. The idea of the ‘digital detox’ is more talked about than done. As seen above, this has become a contextual driver – the idea of very regular connection is so embedded that it conditions the shift towards more focused networking applications. There are, though, probably some niche opportunities for the app equivalent of the ‘slow food’ movement, targeting jaded thirtysomethings (ahem) who have run out of things to say, who notice their friends have too, and who may want a less persistent, more occasional relationship with their networks. This will remain a counter-trend, though.

Lastly, worldview. Despite the early promise of networks like Quora, we prefer to be in the company of the like-minded, to the point that Facebook has had to act to squash the runaway virality of ‘social news’ sites like Upworthy which trade on human interest stories and other ‘link-bait’, in favour of giving breathing room to more ‘serious’ organs of the press.

Looking forward – revised Pivot Points

So, thinking ahead over the next few years, what would we carry forward? I’d suggest that the issues of scale, privacy and focus will remain relevant, but I’d briefly reframe the uncertainties as follows, and add a fourth which is new.

  1. Utility. Will we want to carry our identity and preferences across networks (‘Passport’ futures) or keep them separate (‘Padlock’ futures)?
  2. Privacy. Will we want tighter controls over third-party data access (‘Speak No Evil’ futures), or limits on what networks know in the first place (‘See No Evil’ futures)?
  3. Focus. Will we want networking applications to be provided by a few big companies (‘Big House’ futures) or by many small ones (‘Small Holding’ futures)?
  4. Discovery. Will we want to discover content based on the people we know (‘Connected’ futures) or the things we’re interested in (‘Curated’ futures)?

The fourth one is a genuinely new addition because, even in 2011, the use of social networks as channels for large-scale discovery of news, information, entertainment etc. (broadly, ‘content’) was in its infancy. Since then it has exploded, but its terms have changed. Facebook, in particular, has switched its focus from the social graph (whom you know) to the interest graph (what you like) as a way of serving content and monetising users. This has been a largely unstated change and is generating a backlash from users, as this recent wildly popular post shows. But even while users are demanding their social graph back, they are making more use of interest-based networks (e.g. YouTube subscriber channels, Instagram, Tumblr among teens, Pinterest among young women). In either case, networks will need to be straight about the grounds on which they enable networking and discovery.

As before, these are not predictions, just signposts. These new Pivot Points are more commercial in orientation than before – more about business models and ownership of data, less about the specifics of features provided. Assuming that doesn’t just reflect my interests (maybe), I think it indicates the growing maturity of the category, and users’ growing awareness of the need to come to an accommodation with what are, after all, businesses trying to make money out of them. If these readings are valid, then even while social networking services become a more established part of everyday life, the business environment could get tougher for those who provide them.

Thanks to Andrew Curry for the invitation/prod to revisit the Pivot Points.

Using real-time data in a crisis

Last week I presented to the Market Research Society on using real-time data in crisis management situations. I’m putting it up here in case it’s useful to anyone who finds their brand melting down.

As with all my presentations, at least half of it is pictures. So here’s a quick rundown of roughly what I said:

- This is a presentation about process in a real-time environment. Nobody gives out awards for process. But everybody wishes they had one when they get to their desk and realise their brand has gone from ‘fine’ to ‘critical’ overnight.

- I’m a strategy director at Fabric. We’re a creative optimisation agency backed by WPP – we help brands use their data to deliver creative advantage. So we spend a lot of time helping clients bring their data together, measure what matters, and find simpler ways to consume and share it, in real time (or very close).

- We work with over 150 brands in 25 markets, and with some genuinely global clients. Because we’re focused on helping clients use their data better, we do a lot of work advising on capabilities as well as measurement – how clients should work with their data more effectively. And a lot of that, these days, is about using data fast, including crisis management.

- A real-time media environment – one that’s fast-moving and constant, with lots of participants (like social media) – has lots of interesting new ways of putting brands in crisis. And I really do mean brands. Lots of businesses have good crisis management capabilities, but they lie in corporate communications or legal, not with brand teams.

- Even great brands can suddenly find themselves in an unfamiliar world of pain. Sometimes it’s your fault, sometimes it’s not. It’s easy to panic when you most need to be calm.

- For the first time, data moves (almost) as fast as a crisis does. Good use of data during a crisis requires the discipline of research at the speed of social. No easy task.

- Getting it right isn’t just about having data, it’s about being really, really diligent in how you organise and use it, from the start. That kind of discipline can keep you out of crisis, and help you deal with it maturely and quickly when a crisis happens.

- So, five tips for using data effectively in a crisis…

Keep perspective. Know how big a problem is, how fast it’s moving, and how big your response needs to be. Know what your blind spots are when it comes to measurement or listening. Use data to stop people from panicking.

Measure from the start. Know what the problem is, how it affects you, how you measure the damage and how you measure your recovery. Do that at the start. Set some key performance indicators and keep everyone focused on today’s task.

Sort out your chain of command. It’s probably not the same as your normal approvals process – it may need more senior people, but may also need to be shorter to get things done quickly. Know how you’re going to communicate with your crisis team, do it consistently, and keep it simple. Know when you’ll escalate, and who needs to know what, and how often.

Set stages and gates. Work out which order you need to solve problems in. Set threshold measures that you’ll monitor every day, so you know when you’ve moved from stage one to stage two of a crisis response, etc. (what Churchill called ‘the end of the beginning’). Use data to let everyone know how far you’ve come – and what’s left.

Know your exits. Seriously, a crisis can feel like it will last forever, but it does end. Don’t get addicted to being in a crisis. (It’s easy to do – when your back’s against the wall, every move feels important.) Know when and how you’ll move on.
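The ‘stages and gates’ tip lends itself to a simple daily threshold check. A sketch – the metric name, stage labels and threshold values are all invented:

```python
# Each stage has a gate condition; we stay in the first stage whose
# condition still holds, and move on once volumes fall below threshold.
STAGES = [
    ("stage one: containment",  lambda m: m["negative_mentions_per_hour"] > 500),
    ("stage two: recovery",     lambda m: m["negative_mentions_per_hour"] > 100),
    ("stage three: monitoring", lambda m: True),  # default once volumes subside
]

def current_stage(metrics: dict) -> str:
    """Return the first stage whose gate condition still holds."""
    for name, gate in STAGES:
        if gate(metrics):
            return name
    return STAGES[-1][0]

print(current_stage({"negative_mentions_per_hour": 650}))  # stage one: containment
print(current_stage({"negative_mentions_per_hour": 150}))  # stage two: recovery
print(current_stage({"negative_mentions_per_hour": 40}))   # stage three: monitoring
```

The point isn’t the code – it’s that the thresholds are agreed in advance, so everyone knows when ‘the end of the beginning’ has been reached.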

Making ad retargeting less annoying

Yesterday I wrote this post about a book in which I have a chapter, which meant going to this page to get the link. I obviously didn’t buy the book because I already have a copy, not that I expect the internet to know this. And behold, today in Facebook’s sidebar I see this ad:


Which is sort of fair enough, and sort of not. It got me wondering, how do we reconcile these two truths?

  1. Ad retargeting is effective.
  2. Ad retargeting is annoying.

As I said, I can’t expect the internet to know that I own a book whose page I visited just for the purpose of getting the link. So by the logic that says ‘this person looked up a book, then didn’t buy it’, it’s reasonable to infer ‘they may act when given a second opportunity to look at the same book’.

But on the other hand, the same inference is not reasonable. There are many, many reasons why someone may have visited the page for that book and not bought it (disinclination, lack of time, vanity, blue book cover fetishism, etc.). Some of these are more likely than others, but in all cases the outcome is the same: a person had the chance to buy the book, and didn’t.

Straight retargeting is the online manifestation of a mindset that says: I heard your first answer, and it was ‘no’, and I’m going to keep nagging you. That is the equivalent, in offline sales, of you popping into your local bookstore, looking at a book, putting it back on the shelf, leaving… then being phoned several times by the bookseller saying, ‘Did you want that book?’

And beyond the world of sales, there’s a word for people who don’t realise that ‘no’ means ‘no’.

Look, retargeting works well relative to other forms of online display media. But it works best when there’s been a genuine signal of intent to purchase, such as adding a product to a basket. The evidence base is patchy, but according to this 2011 study, 71% of online shoppers abandon baskets – but 75% of those come back, typically spending 55% more than direct converters, and the uplift from retargeting ‘basket abandoners’ within 12 hours is around 15-20%. So a nudge in that critical period reflects a normal behaviour and can be useful. But even in this case, the conversion rate from retargeted display ads to basket abandoners is still only 0.3%.

Now imagine how dismal the uplift will be when retargeting people who have merely visited a product page. Yes, it’s better – but only better than something really bad. The reason it’s better is that you’re applying a segmentation over your ad inventory – albeit a fairly dumb one. People who have looked at a book are, of course, more likely than average to be interested in that book.

A recent (Dec 2013) survey on retargeted ads found that 38% of people found them offputting, in addition to the 46% who ignored them and the 16% who claimed to have been prompted by them. Even if we take this optimistic 16% figure (rather than the 0.3% conversion rate from the SeeWhy study), that means that retargeted ads annoy more than twice as many people as they win over. Not surprising, as 53% in the same survey said they had privacy concerns over retargeted ads.
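The arithmetic behind that ‘more than twice as many’ claim, using the survey figures quoted above:

```python
# Survey responses to retargeted ads (the Dec 2013 survey cited above).
annoyed  = 0.38  # found them offputting
ignored  = 0.46  # ignored them
prompted = 0.16  # claimed to have been prompted by them

# The three categories account for all respondents.
assert abs((annoyed + ignored + prompted) - 1.0) < 1e-9

# Even on the generous 16% 'prompted' figure, retargeted ads annoy
# roughly 2.4 people for every one they win over.
print(f"{annoyed / prompted:.1f}x")  # 2.4x
```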

So why not take the ‘stalker’ factor out of retargeting? Product-view data gives you a very simple segmentation, if you can be bothered to connect the product back to its category. In this case, all you need to know is that the book I looked at is a book about marketing. Then you can target me with other marketing books. I’ll feel a bit less creeped out, and you’ll still outperform non-targeted advertising because you won’t be serving me books on stuff I don’t care about, or marketing books to people with no interest in marketing. You’ll also be able to switch tactics – if one book doesn’t grab my attention, another one might.
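A sketch of what that category-level segmentation might look like – the catalogue, the titles and the mapping are invented for illustration:

```python
# Map each product to its category, then retarget with other products
# from the same category rather than the one already viewed.
CATALOGUE = {
    "multichannel-marketing-ecosystems": "marketing",
    "how-brands-grow":                   "marketing",
    "the-long-and-short-of-it":          "marketing",
    "thinking-fast-and-slow":            "psychology",
}

def category_retargets(viewed_product: str, catalogue: dict[str, str]) -> list[str]:
    """Other products in the same category as the one the user viewed."""
    category = catalogue[viewed_product]
    return [product for product, cat in catalogue.items()
            if cat == category and product != viewed_product]

# A visitor who viewed one marketing book is shown other marketing
# books - related, but without the same-product 'stalker' effect.
print(category_retargets("multichannel-marketing-ecosystems", CATALOGUE))
```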

I’d love to see some data on whether this is any more or less effective than same-product retargeting when served to people who have given no intent signal. But it’s got to be less annoying.

Plug: Multichannel Marketing Ecosystems

This post is a shameless plug for this book, the catchily-named Multichannel Marketing Ecosystems, which may not make the New Year bestseller list, but which does contain a chapter by me and Chris Perry (CEO of Fabric).


Despite sounding a bit science-fiction-y, the book is a collection of essays by people working on the problems associated with trying to plan and execute marketing campaigns that exist in lots of different channels, to varying degrees of breadth and depth, and whose audiences may encounter them in whole, or in part, and in any order.

Our chapter – the alliteratively-titled ‘Making money with metrics that matter’ – argues that multi-channel marketing requires an approach to metrics which goes beyond simple conversion funnel logic and brings channel-level analytics more thoroughly into the domain of marketing strategy. A marketing strategy should be clear on the role of each channel, and attach meaningful metrics and goals to each channel (not just a ‘bottom line’ of brand equity or sales metrics) that do not depend on a channel being encountered at a particular point on a journey. This understanding should be shared by all those accountable for the strategy, not merely by analysts, and information about channel performance should be used to optimise and, where necessary, re-organise the channel mix. This idea – know what you’re trying to do, where, why, whether it’s working, and when it’s not, why it’s not – isn’t rocket science, but it requires a serious and shared commitment to measurable standards of effectiveness from everyone in the marketing mix. That’s more challenging, and more rare, than most of us like to admit. For agency types, for example, it means choosing the metrics by which your work will be judged in advance – not waiting to see which ones look best in the wash-up.

The book is edited by Markus Ståhlberg and Ville Maila, and is published by Kogan Page.