ADOTAS – In the fast-evolving lexicon of display advertising, you'd be hard-pressed to find a pair of terms more at odds with each other than performance (i.e., advertiser ROI, typically measured against CPA or CPC goals) and delivery (total scale: spend, impressions, clicks, or conversions). For most campaigns, there's a sharp trade-off between the two. The more total conversions (delivery) you want, the more each conversion tends to cost and the worse your net performance.
After all, it seems logical: some conversions are harder to come by than others. For example, you might see through-the-roof click-through or action rates if you restricted a campaign to remarketing only those users who had visited the advertiser's homepage in the last 15 minutes, but delivery would be dismal. Conversely, a run-of-site campaign may get you great delivery at the expense of performance.
This inverse relationship is a reason why it’s important to consider delivery and inventory quality as well when comparing the performance of any two ad platforms or technologies. Measuring performance in isolation without taking scale into account could result in a few unpleasant surprises.
On the other hand, while there's always some inevitable trade-off between performance and delivery, the two don't have to be a fractious couple. Innovations in computational advertising and data science, for example, are reducing the magnitude of this trade-off.
In a world rapidly turning toward real-time bidding exchanges (RTB) and demand-side platforms (DSPs), access to plentiful inventory and reach is becoming something of a level playing field. If a platform has a truly scalable software and hardware architecture, marketers and agencies should be able to examine and bid on several billion impressions a day. If reach is no longer the problem, then it becomes a question of how better technology, algorithms, and data can improve performance at large volumes.
So, what are the technology elements that can improve performance at scale? The core (but not the only) problem is one of prediction: identifying users who are likely to respond to the advertiser's message. The more precisely the technology can discern which user or impression is likely to respond (and what the response rate is), the more accurate the bidding and the fewer the wasted impressions. There is a bevy of algorithms, statistical methods, and machine-learning techniques that can do this, and some work much better than others.
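To make the link between prediction accuracy and bidding concrete, here is a minimal sketch (not any platform's actual logic; the function name and parameters are invented for illustration). If a model predicts the probability that an impression leads to a conversion, the most an advertiser should rationally bid is that probability times the value of a conversion, approximated here by the target CPA:

```python
def max_bid_cpm(p_conversion: float, target_cpa: float) -> float:
    """Expected value of a single impression, expressed as a CPM bid ceiling."""
    expected_value = p_conversion * target_cpa  # dollars per impression
    return expected_value * 1000                # CPM = price per 1,000 impressions

# A model that predicts a 0.05% conversion rate against a $50 CPA goal
# yields a ceiling of 0.0005 * 50 * 1000 = $25 CPM.
ceiling = max_bid_cpm(0.0005, 50.0)  # 25.0
```

Overestimate the response rate and you overpay for impressions; underestimate it and you lose auctions you should have won. That is why sharper prediction translates directly into less wasted spend.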
But it’s not just about predictive algorithms, better audience look-alikes or optimization; it’s as much or even more about efficient learning and the data that powers these algorithms. Without the right data, even the most powerful optimization will fall flat.
Users respond differently in different situational contexts, which is why technology should blend together all the data available about an impression in real time. For example, a user is in car-buying mode for only a few weeks or months every few years. A different user may be more amenable to a clothing ad while reading an online fashion magazine than at other times.
The ability to layer behavioral, contextual, demographic, geo, performance, and other data types from different sources in real-time allows for more robust and more discerning decisions on which impressions are valuable for an advertiser and which are not.
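One common way to layer signals like these into a single real-time decision is a logistic model over features from each data source. The sketch below is purely illustrative: the feature names and weights are invented, and in practice they would come from a trained model rather than being hand-set.

```python
import math

def response_score(features: dict, weights: dict, bias: float = -6.0) -> float:
    """Combine behavioral, contextual, geo, etc. signals into one
    predicted response probability via a logistic function."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, one per data type:
weights = {
    "visited_site_last_24h": 2.0,  # behavioral
    "page_is_fashion": 1.0,        # contextual
    "in_target_metro": 0.5,        # geo
}

impression = {"visited_site_last_24h": 1.0, "page_is_fashion": 1.0}
score = response_score(impression, weights)
# Both signals together score higher than either signal alone,
# which is the point of layering data sources.
```

The exact model matters less than the principle: each additional data type refines the estimate of which impressions are valuable for a given advertiser and which are not.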
In the end, a number of technology and optimization pieces are required to slow the inevitable drop in performance as spend goes up. From smarter budget allocation to more efficient campaign management, every piece has its place in the puzzle of better performance at scale.
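As a toy example of the budget-allocation piece, one simple (and admittedly simplified) approach is to fill the cheapest conversion sources first. Everything here is hypothetical: the line items, their CPAs, and their spend capacities are invented, and real optimizers face rising marginal costs rather than flat ones.

```python
def allocate_budget(line_items, budget):
    """Greedily spread a fixed budget across line items, cheapest CPA first.

    line_items: list of (name, cpa, max_spend) tuples.
    Returns a {name: spend} plan.
    """
    plan = {}
    for name, cpa, max_spend in sorted(line_items, key=lambda item: item[1]):
        spend = min(max_spend, budget)
        if spend > 0:
            plan[name] = spend
            budget -= spend
    return plan

items = [
    ("retargeting", 20.0, 500.0),   # cheap conversions, little capacity
    ("run_of_site", 60.0, 5000.0),  # expensive conversions, huge capacity
    ("lookalikes", 35.0, 1500.0),
]
plan = allocate_budget(items, 2500.0)
# Retargeting fills first ($500), then lookalikes ($1500),
# and only the remaining $500 goes to run-of-site.
```

Even this toy version shows the article's central tension: once the cheap, high-performing inventory is exhausted, every additional dollar of delivery buys worse performance.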
We may still not be able to have our cake and eat it too, but with rapid innovations in display technology, we’re getting a lot better at making every single impression count.