If you had a pound for every minute spent discussing the plight of PR evaluation, you’d have an agency bigger than Deloitte, Accenture and KPMG combined.
History of Measurement
Back in my early days of agency life, when carrier pigeons delivered releases and research equalled going to the library to trawl through microfiches, showing a wire-bound cuttings book with circulations and OTS was enough for most clients.
But, over time, the need to prove greater value from our efforts, often against shrinking budgets, forced many of us to delve deeper.
Whilst we tinkered and tailored, the same question kept being raised while the industry continued to self-flagellate over the paucity of its response: “We love what you do but how can you prove your value to the business?”
Today we have access to highly sophisticated analytics software.
Larger agencies are sometimes running up to five different platforms to produce increasingly complex reports. Many smaller agencies, which can’t afford analytics subscriptions, are cleverly blending quantitative and qualitative research with media outputs as evidence of ‘change’.
But if the client won’t cover the costs, it’s down to the agency to invest in proving the value of its creativity, often utilising a suite of free tools. The result is a picture not too dissimilar to me over two decades ago, creating a cuttings book with a ruler, scalpel and glue and using a Casio FX-550 to work out the OTS.
Just last week, reporting on average PR salaries “plummeting by 7% as the sector grows”, PRCA director-general, Francis Ingham, was unequivocal in identifying both the problem and the solution to a talent drain impacting the industry: "Our long-standing belief is that evaluation is the answer to this problem. Until we can prove the value we bring, we will not be able to charge the price we should, and so pay people the amount they deserve."
Hell, it’s got so bad there’s even a Measurement Month staged by AMEC – the “International Association for the Measurement and Evaluation of Communication”.
It looks like a world tour devoted to helping the industry get better at telling people how much better it’s got at evaluation, whilst admitting it’s got a long way to go before it catches up with clients and our richer cousins over in advertising.
Same shit, different day.
Bored of once more reading how crap we are at proving our creative worth, I turned to the greatest gathering of creatives for enlightenment.
Cannes Lions is fast approaching, and its awards are rightly coveted by agencies and individuals as their impact can deliver enormous business benefits.
Research from the Institute of Practitioners in Advertising (IPA) categorically proved that award-winning ads are 11 times more effective than those that don’t win. How was ‘performance’ measured?
They looked at the impact of the ad on a variety of business metrics including market-share growth, sales, profits, return on investment, likability and emotional appeal.
Of course, the majority of PR firms will rightly say there is not a cat in hell’s chance of evaluating their work against any of the above measures.
Which probably isn’t a bad thing if the majority of output is media coverage. Many will rightly point to a lack of budget to evaluate effectively. Most will say the majority of clients aren’t interested in investing in independent measurement. And still too many will highlight the inescapable truth of client systems being ‘off-limits’ to PR.
But, as disciplines continue to blur (even if budgets aren’t), and as PR continues to strengthen its creative muscles, there must be things we can learn from analysing the results of 25 Titanium and Grand Prix winners from last year's Cannes Lions. Right?
Hmmm, we’ll see. Titanium Lions, in case you don’t know, were created to celebrate marketing work that doesn’t fit neatly into traditional categories. Winners included Palau Pledge, KFC to FCK, Bloodnormal and Nike’s Nothing Beats a Londoner.
The Study And Its Results
For the analysis, I set myself a few crude, but obvious, measures such as (a) is a tangible brand metric included in the outcome? (b) did the campaign drive sales? (c) was attitudinal change recorded? (d) did the campaign deliver behavioural change?
I then recorded every measure mentioned, grouped them all together, split them apart, colour coded them – you know, the usual. It was a quiet day at the office.
Here’s a selection of what I found:
Just TWO campaigns included sales uplifts in their outcomes
Although I’m not sure it counts when It's a Tide Ad “helped launch a new line extension which experienced 35% sales growth”.
35% growth from a standing start could mean anything. But it’s a start. And I’ll accept that Banned from Spotify drove an increase in streams for the Muslim artists featured.
So, an increase in sales. Sort of.
Only THREE out of 25 winners included brand metrics – metrics I see as tangible contributors to higher brand benefit.
Dundee: Son of a Legend stood out as one of the few entries across the board to state tangible results. The campaign drove an 83% increase in "intent to book", helped deliver a 20% organic lead conversion rate versus 2% historically and created dramatic spikes in traffic for their airline partners.
That’s it. Three.
SIX campaigns included Advertising Value Equivalent (AVE) as a measure of success. WHAT? You’re kidding, right? Nope. “AVE $84.4m”, “$67m earned media”, “USD 22.4m”.
When the PR industry together with its clients has been calling for an end to AVEs for about the past 20 years, why, oh, why are they still being included? Pointless and meaningless. But hey, they helped win awards.
EIGHTEEN of the twenty-five top winners relied pretty much entirely on impressions and reach to show the impact of creative.
I don’t know about you, but impressions to me are like OTS and AVEs: meaningless as a measure in their own right, and still pretty much meaningless when integrated with other measures. Yet they were by far the most popular metric included.
“7.9m impressions via print media”; “1.2 billion media impressions”; “4.3bn media impressions” and “2.35 billion impressions in 59 countries in the first three weeks.”
Other Notable Findings
- Only ONE winner stated their work drove click-throughs – Nike’s Nothing Beats a Londoner. 171,000 if you’re interested.
- TWO - The number of times Burger King claimed greatness for its campaigns. Which is fair enough for Scary Clown – “the most effective and engaging full-global activation in Burger King's history.”
- THREE campaigns claimed to have delivered attitudinal change.
- FOUR – ish. The number of campaigns which had some evidence for behavioural change.
- THIRTY - AT LEAST - The number of different social media measures included. From “social media share 219m” to the ever popular “trending on Twitter” and “690,000 likes, shares and comments”, “1.9bn social impressions”, the ability to use social metrics to signpost success is mind-boggling. Number of campaigns which stated how much was invested in boosting the social numbers? None – that were published at any rate.
Remembering that an award entry is there to sell itself and doesn’t always include every measure of business success, what, if anything, can we conclude? The jury’s still out, but from what I can see there are several takeaways.
- PR industry bodies need to invest in researching the value of PR outputs – much like the IPA continues to do for advertising. We don’t need more frameworks; we need proof points.
- We need to switch the conversation from bemoaning the sorry state of PR evaluation to continually showcasing the impact of the very best, creative campaigns to drive value up the chain. And probably stop talking about impressions while we’re at it.
- Every budget should have independent M&E included as a separate line item. Clients not willing to pay for it should stop questioning PR’s value and agencies that don’t pay enough attention to proving value should stop moaning about their clients moaning about evaluation.
- The majority of creative work is driven by short-term needs which, in turn, delivers short-term tactics which, all too often, end up driving value down. For PR to improve its reputation as a business grower, its work needs to be judged over longer time periods – anyone not acquainted with the seminal piece of work that is The Long and the Short of It, needs to be.
- An industry that continues to point the finger at its own inability to prove value should stop giving out awards called “best low budget campaign”, as such categories champion low investment.
In many ways the analysis gave me some comfort. It’s easy to think that someone else has all the answers when the reality is that we’re all in the same boat, still trying to find the silver bullet where no silver bullet exists. Analytics may have replaced the cuttings book, but the critical questions remain unanswered, and maybe it’s time we finally got used to it.
Notes On The Analysis
Whilst I hold by the above, there are always going to be questions over analysing outcomes. Especially when quite a few of the award entries didn’t give any hard data whatsoever.
I tried to discount wild or inaccurate claims and ignore anything which pointed to future benefits from the work such as “we believe the campaign will generate $250m over the next 12 months”.
So, I took what I could find, read between the lines, did a bit of detective work and filled in some blanks; where hard data was provided, I kept to it.