Crossing the Channel: Assessing the Value of Data Overlays for Programmatic Buying

The last edition of Crossing the Channel looked at the main players in audience targeting. Now we'll explore a rational approach to assessing the value of the data overlays some of those players provide.

When it comes to Real-Time-Bidding-based programmatic buying — defined here as a marketer’s ability to make a choice on each impression rather than having to buy impressions in bulk — targeting is the name of the game. If the platform that executes your buying — commonly known as a Demand-Side Platform, or DSP — makes a choice about bidding on every single impression, then the more information you have about those impressions and how relevant each is to you, the better your choice can be.

In an increasingly liquid environment where every bidder has, in theory, an equal opportunity to win a bid on an impression, information creates asymmetry. If you have access to detailed information about the value you would place on each impression, you can gain an advantage over your competitors by bidding only on the impressions most relevant to you.
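To make that concrete, here is a toy sketch of such a per-impression decision. The conversion rates, values, and price floor below are all invented for illustration; a real DSP's bidding logic is far richer than this.

```python
# Toy sketch of a per-impression bid decision. All figures are invented.
def decide_bid(conversion_rate: float, value_per_conversion: float,
               floor_cpm: float) -> float | None:
    """Return a CPM bid, or None to pass on the impression."""
    # Expected value of one impression, expressed as a CPM.
    expected_value_cpm = conversion_rate * value_per_conversion * 1000
    if expected_value_cpm < floor_cpm:
        return None  # not relevant enough at this price: skip
    return expected_value_cpm

# Better data sharpens the conversion-rate estimate, and thus the decision.
print(decide_bid(0.0001, 50.0, floor_cpm=2.0))   # 5.0 -> bid $5 CPM
print(decide_bid(0.00001, 50.0, floor_cpm=2.0))  # None -> pass
```

The asymmetry lives in the conversion-rate estimate: whoever scores each impression's relevance more accurately bids on fewer, better impressions.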

Data Overlays
In the world of programmatic buying, such information comes in the form of data overlays, which are usually provided by a data management platform (DMP). Any advertiser that has dipped its toes into the world of data overlays knows that there are many choices, and that those choices can vary dramatically by vertical.

But how valuable is the data overlay to your overall performance? Put more simply: is the price of the data justified by your overall performance metric, whether you measure CPMs, CPCs, CPAs or some other KPI? The question may look simple, but isolating the price of each data/technical vendor on each buy can prove challenging. To make the exercise easier, structure your campaigns from the beginning so that this information can be segmented out as easily as possible. Such structuring is even more important if you operate in a sector where vertically relevant data is scarce and expensive.
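As a rough illustration of the arithmetic (every figure below is hypothetical), here is how a data overlay's CPM fee changes an all-in CPA:

```python
# Hypothetical example: folding a data vendor's CPM fee into a CPA.
impressions = 1_000_000
media_cpm = 2.00     # price paid for media, per thousand impressions
data_cpm = 0.50      # hypothetical data-overlay fee, per thousand impressions
conversions = 400    # conversions attributed to the strategy

media_cost = impressions / 1000 * media_cpm  # $2,000
data_cost = impressions / 1000 * data_cpm    # $500

print(f"CPA, media only: ${media_cost / conversions:.2f}")               # $5.00
print(f"CPA, all-in:     ${(media_cost + data_cost) / conversions:.2f}")  # $6.25
```

In this invented case the overlay would need to lift conversions by 25% just to break even on its fee.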

Here are a few basic rules to keep in mind:

1) 1st-Party Data Is King

You will find the greatest asymmetry — and thus the biggest advantage — in your own data, because you are the only one who has full access to it, and thus the only one who can assess its value. This kind of data may be used to retarget users who have visited your website, to segment your CRM database and integrate those results with your DSP, etc.
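For instance, here is a minimal sketch, with invented fields and thresholds, of carving a retargeting segment out of a CRM extract for upload to your DSP:

```python
# Hypothetical sketch: build a first-party segment from a CRM extract.
crm = [
    {"user_id": "u1", "lifetime_value": 900, "visited_site": True},
    {"user_id": "u2", "lifetime_value": 120, "visited_site": True},
    {"user_id": "u3", "lifetime_value": 640, "visited_site": False},
]

# Segment: site visitors with high lifetime value, ready for DSP upload.
segment = [u["user_id"] for u in crm
           if u["visited_site"] and u["lifetime_value"] >= 500]
print(segment)  # ['u1']
```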

2) Generic Demographic/Income Data Should Be Coupled with Geographic Info

This kind of data is most effective when it is segmented geographically. Many sources are typically available, so it's worth testing a selection of providers in your vertical to learn which gives you the most relevant information. Age- and gender-based data is not necessarily the most informative when it comes to RTB, but it can help you identify consumer segments that respond to your messaging online, and it ports easily to your TV buys, for example, because age/gender targeting is easy to replicate across platforms.

3) Interest/Vertical-Specific Data: Great but Pricey

This is where data can get both intensely relevant and quite expensive, especially if you operate in a vertical where relevant sources are scarce. When assessing this type of data, the campaign structure outlined below is crucial to understanding performance.

Once you've weighed your data sources, you're ready to consider the structure of your campaign.

1) Structure Your Strategies Using A/B Testing Principles

If you want to test several data sources, create one strategy using a single data source, then replicate that strategy for each additional source, swapping in one data source per copy. DO NOT mix several data vendors in one strategy: doing so will prevent you from isolating each vendor as a separate cost/performance variable.

If you want to understand the value of each vendor, as well as the value of using no vendor at all, create a control group: replicate the strategy without any data vendor, as sketched below.
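A minimal sketch of that structure, assuming a hypothetical config format and invented vendor names; everything except the data source is held constant, A/B-test style:

```python
# Hypothetical sketch: one strategy per data vendor, plus a no-vendor control.
base_strategy = {
    "campaign": "spring_sale",
    "bid_cpm": 2.00,
    "creative": "banner_300x250_v1",
}

vendors = ["vendorA", "vendorB"]  # invented vendor names

# Each strategy is an exact copy of the base, varying only the data source.
strategies = [{**base_strategy, "data_vendor": v} for v in vendors]
strategies.append({**base_strategy, "data_vendor": None})  # control group

for s in strategies:
    print(s["data_vendor"] or "control", "->", s["campaign"], s["bid_cpm"])
```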

2) Maintain Clear Naming Conventions

I cannot stress this one enough: the best, easiest, and cheapest way to analyze performance is to use clear naming conventions for each strategy, encoding every data/tech vendor used in the strategy into its name. This is the best way to segment every cost variable involved in your strategy and isolate the price/performance variables in your performance equation.
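One possible convention follows; the field order and vendor codes here are invented for illustration, but the point is that the vendor code can always be parsed back out of a report:

```python
# Hypothetical naming convention: campaign_vendorcode_variant.
def strategy_name(campaign: str, vendor_code: str, variant: str) -> str:
    return f"{campaign}_{vendor_code}_{variant}"

def parse_strategy_name(name: str) -> dict:
    campaign, vendor_code, variant = name.split("_")
    return {"campaign": campaign, "vendor_code": vendor_code, "variant": variant}

name = strategy_name("springsale", "VDA", "geo1")
print(name)                       # springsale_VDA_geo1
print(parse_strategy_name(name))  # {'campaign': 'springsale', 'vendor_code': 'VDA', ...}
```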

3) Keep a Clear Pricing Sheet for Each Vendor

To calculate the data/tech cost of each strategy, match the price paid for each service to the vendor code in your naming convention. This sheet is key to your cost calculation, so keep it current.
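A minimal sketch, assuming a pricing sheet keyed by the same vendor codes used in your naming convention (the CPM fees are invented):

```python
# Hypothetical pricing sheet: data CPM fee per vendor code.
price_sheet = {"VDA": 0.50, "VDB": 0.85, "CTRL": 0.00}

def data_cost(vendor_code: str, impressions: int) -> float:
    """Data fee for a strategy, from its vendor code and delivered impressions."""
    return impressions / 1000 * price_sheet[vendor_code]

print(data_cost("VDA", 1_000_000))  # 500.0
```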

Once you have isolated each cost variable in each strategy, you can analyze performance versus price at every level, comparing each strategy against its control group and assessing the value of each piece of technology or data relative to the control's performance.
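Putting the pieces together, here is a sketch, with invented numbers, of comparing each strategy's all-in CPA against the control's:

```python
# Hypothetical report: per-strategy delivery and conversions, keyed by vendor code.
report = {
    "VDA":  {"impressions": 1_000_000, "media_cost": 2000.0, "conversions": 420},
    "VDB":  {"impressions": 1_000_000, "media_cost": 2000.0, "conversions": 600},
    "CTRL": {"impressions": 1_000_000, "media_cost": 2000.0, "conversions": 380},
}
price_sheet = {"VDA": 0.50, "VDB": 0.85, "CTRL": 0.00}  # data CPM per vendor

def all_in_cpa(code: str) -> float:
    r = report[code]
    data_fee = r["impressions"] / 1000 * price_sheet[code]
    return (r["media_cost"] + data_fee) / r["conversions"]

control = all_in_cpa("CTRL")
for code in report:
    cpa = all_in_cpa(code)
    # Positive percentage = cheaper CPA than the control.
    print(f"{code}: all-in CPA ${cpa:.2f} ({(control - cpa) / control:+.1%} vs control)")
```

In this invented report, vendor B's lift more than pays for its data fee while vendor A's does not, which is exactly the distinction the control group exists to surface.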

Now you’ll know what each of your data overlays is worth. And you’ll finally be certain — in a way that very few are — that every impression you buy is made with the best available information, and the information most relevant to you.
