INSIGHTEXPRESS – Let’s face it: online ad effectiveness research has taken a beating in the past 24 months. With the number of critics of the methodology increasing, it’s a wonder measurement is occurring at all.
But in light of all this criticism, let’s not forget that on the vendor side we’re not sitting idly by waiting for the market to disappear. In fact, at InsightExpress we’ve spent the past eight years innovating and pushing the ball forward in ad effectiveness research. Most recently, we introduced the Ignite Network, an initiative that’s been years in the making.
But we’re not the only company launching new ad effectiveness products. Everyone in this space is getting into measurement and revising their methods – which can be crazy confusing for clients. So the real question is: how can you evaluate what’s happening in this space and determine who’s got the better mousetrap?
Along those lines, I’m introducing a new column specifically targeted at understanding the evolving ad effectiveness space and all the new mousetraps available. Yes, I will be talking about competitive offerings, and there are people here at InsightExpress who think I’m crazy to do so. But my goal is to help provide context (and since I believe we’ve built the best mousetrap in this space, I’m not afraid of comparisons).
Also, it’s probably important to put some context around how one should properly evaluate vendors and methods in online ad effectiveness. Dr. Paul Lavrakas put many of his concerns about historical approaches on paper, and his critique serves as a good starting point for understanding the issues.
But let me take things up one level and put this discussion into context given where the market is today. Here are the topics we should cover to fully understand how to evaluate online ad effectiveness research:
- Online ad effectiveness research is… research.
- Experimental design is the Holy Grail, but almost impossible to achieve.
- Experimental design, part II.
- Cookie deletion is rampant.
- The crisis of control cell assignment.
- Is more data better? Does response rate matter?
- Sample representativeness is important.
- Does the immediate effect have any value?
- How does live recruit differ from panel?
- In panels, does the number of panels matter?
- Does attribution modeling make sense?
Obviously, I’m not going to tackle all of this in one post. To kick things off, let me tackle the most obvious item on my list of important issues – online ad effectiveness research is… research.
Sounds like a “no duh” statement, doesn’t it? Yet it’s a very important point to make. Research answers many questions and tells us many things, like the population distribution of the country by age or the fact that 34.6% of the population has never seen the ocean.
How many of your vendors have actual researchers on staff looking at your results? How many vendors are technologists or project managers rather than methodologists? It’s a more important point than you might imagine.
Our space has become extremely cluttered these days, and many of the new entrants are companies that build technology rather than do research, or that have never done ad effectiveness research or full-service research before.
Those of you who have ever seen me speak on the topic of research analytics know I adamantly believe that research only survives where there is an equal focus on 21st century technology and proven research expertise. For that reason, I’m proud of the InsightExpress team from both a research and engineering perspective.
But if you take a look around the industry today at all of the companies competing in this space, there are a large number of them that are pure technology vendors. They’re the typical start-up that relies on flashy and well-built technology to sell their product. But here’s the catch: these companies tend to focus on keeping margins slim so there’s no room in the budget for researchers.
Translation: there’s no room in the budget to ensure the data is correct.
This is a big problem: the designs we employ to measure online ad effectiveness are experimental designs (more on that later), and experimental designs are notoriously susceptible to biases without constant scrutiny of the data. We know this because we’ve run well north of 2,000 of these studies, and because our clients have pointed out where some of their vendors report incorrect data.
Look at it this way: I already told you that 34.6% of the population has never seen the ocean, but you shouldn’t believe me on that stat. I came to that number by asking 10 of my friends what percent of the population they believed had not seen the ocean. Doesn’t sound like a great method, does it?
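To put a number on just how weak a sample of 10 is, here is a minimal sketch (mine, not from the post) of the standard 95% margin of error for a sample proportion, using the normal approximation. It assumes, generously, that the 10 friends were a true random sample of the population, which they were not:

```python
import math

def moe_95(p_hat: float, n: int) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return 1.96 * se                          # 1.96 = z-score for 95% confidence

# Even as a proper random sample, n = 10 would make the 34.6% figure
# almost meaningless: the margin of error is about +/-29.5 points.
print(round(moe_95(0.346, 10), 3))    # 0.295

# A sample of 1,000 shrinks that to roughly +/-2.9 points.
print(round(moe_95(0.346, 1000), 3))  # 0.029
```

And that calculation only covers sampling error; it says nothing about the deeper problem that asking friends to guess a statistic measures opinion, not behavior.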
But ask yourself this: did you believe me when I first mentioned this stat earlier in this post? If you did, you’re like everyone else on this planet in that we all love to take statistics at face value. A researcher’s job is to a) not take numbers at face value, and b) make sure that when they create numbers they can “show their math” as to how they got there.
So for Pete’s sake, please ensure that you’re working with a researcher when you’re buying online ad research. Because no matter how cheap a research product might be, paying for the wrong data is a rip-off at any price.
Republished from the InsightExpress’ InsightfulAnalytics blog with permission.