Most online ad effectiveness studies are designed to measure the branding impact of the campaign. This is because while the internet is uniquely designed to directly measure transactions, it’s not well designed to solicit viewer engagement and/or opinions about advertising. Many of the campaigns that we find ourselves measuring are specifically designed to understand who saw the advertising and whether that exposure turned into increases in brand funnel metrics.
In practice, the invitation to the ad effectiveness survey is served immediately after exposure to the advertisement. Technically speaking, it’s a little more variable than that — for example, a typical DHTML invite is triggered on the page following the one where the ad exposure occurred. Or with short-form research, the survey shows up in the ad space, effectively replacing the ad after a fixed length of time, usually 30 seconds. And it gets even more complicated than that, as I’m sure plenty of people will point out to me. However, suffice it to say that the event of inviting someone to take a survey is typically triggered within minutes or seconds of online ad exposure. It’s really no different than my Nike example.
So now you have to ask yourself the question, “Is data collected immediately after ad exposure useful?” I’ll save you the thinking and simply say yes. But that yes is a qualified yes, as is pretty much everything that’s said in market research. Let me explain the two qualifying factors: comparability and decay.
If you’re measuring online in a vacuum, it may be that assessing the immediate response to advertising is a decent approach when evaluated using a good experimental design. However, we rarely find a brand that runs advertising only online. Typically, advertising is run across multiple media channels, and most advertisers want to benchmark advertising effectiveness across them to answer questions about creative efficiency in each medium. So when you compare online — which is evaluated within minutes of exposure — to TV — which is typically measured within days of exposure — you set up an apples-and-oranges comparison.
Most of the brands we work with dislike online measurement specifically for this reason, since to them it seems as if the methodology is designed to make online advertising look good. Considering that the research is often funded by people who want online to succeed (media publishers), they look at the methodology and discount its value in assessing the impact of their campaigns. As a result, even though there is technically nothing methodologically wrong with such a short timeframe on the measurement end, client advertisers are skeptical of the approach.
Our other qualifying factor is ad decay. If you’re measuring the impact of advertising immediately after exposure, it’s impossible to understand how quickly or slowly the memories of that advertising fade. Understanding the decay rate of your advertising is imperative to determining optimal frequency and implementing a flighting plan for your campaign. Ironically, I don’t know of anyone who regularly takes ad decay into account in their online ad effectiveness measurement.
Late last year we hosted an ARF (Advertising Research Foundation) webinar on cookie deletion and ad decay and presented some of our initial findings on the topic. If you’re an ARF member, you can find a replay of the webinar at their site. But for the rest of you, let me summarize what we found. It may seem like a no-brainer, but online advertising does indeed decay over time. In the data we analyzed, we found that over a 96-hour window, a metric like ad recall (the key metric used in brand trackers to evaluate TV impact) decayed 12 percent. This is a big deal, especially when you consider the implications for comparability. Without measuring at differing intervals post exposure, it’s difficult, if not impossible, to determine ad decay.
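To make the stakes concrete, here’s a back-of-the-envelope sketch of what that single data point implies. Assuming (and this is purely an illustrative assumption, not the webinar’s methodology) that recall follows a simple exponential decay R(t) = R0 · e^(−λt), a 12 percent drop over 96 hours lets you back out a decay constant and an implied half-life:

```python
import math

# Illustrative assumption: exponential decay of ad recall,
# R(t) = R0 * exp(-lam * t). The 12%-over-96-hours figure is
# from the findings above; the model choice is mine.

hours = 96
retained = 1 - 0.12  # 88% of baseline recall remains after 96 hours

lam = -math.log(retained) / hours       # per-hour decay constant
half_life_days = math.log(2) / lam / 24  # time for recall to halve

print(f"decay constant: {lam:.5f} per hour")
print(f"implied half-life: {half_life_days:.1f} days")
```

Under that (hypothetical) model, recall would halve in roughly three weeks — which is exactly the kind of number you’d need for frequency and flighting decisions, and exactly what a measure-once-immediately design can never give you.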
While short-cutting methodology for the sake of simplicity may be enticing, let me also point out that short-form research can be plagued by technical glitches in how the survey invitations are delivered. The best practice for short-form invitations is to trigger the invite so that it covers the ad being measured. By doing so, you effectively prevent the respondent from cheating by just looking at the advertisement on the page. While this might seem like a simple task, believe me — it’s more complicated than it sounds, especially when you think about serving that survey invitation across the hundreds of pages in a typical ad campaign. Take, for example, the ad below for American Family Insurance that ran on Parents.com. Notice the invite to the survey and how it’s not covering the advertisement. While I can’t say first-hand, I’m pretty sure that thanks to this glitch, the campaign was likely a “success.”
Taking a New Approach
Measuring the immediate impact of advertising as well as the decay rate of that advertising is a fundamental analysis that we believe needs to be part of how you evaluate and measure your advertising’s effect.
So, would I recommend immediate measurement of online campaign effects? Well, of course, as long as you’re not concerned with comparability or decay.