Data scientists run very similar (though far more rigorous) analyses when determining how best to match the right mobile ad to the right user while wasting as few ad impressions as possible.
A statistical approach commonly used in digital marketing is the “multi-armed bandit”. In this setting, you hypothetically face multiple slot machines (“one-armed bandits”), and the objective is to determine which machine(s) will deliver the greatest return on investment (the sum of the avalanches of coins minus the sum of coins fed into the machines).
Let’s say you’re at a casino and you’re facing two slot machines. Slot Machine A generated an avalanche of quarters 10 times out of 100 tries, while Slot Machine B generated an avalanche of quarters once in 12 tries. On which machine should you bet?
Intuitively, you might think the slot machine that paid out 10 times in 100 (10%) is better than one with a one-in-12 return (8.33%). But because we have only tried Slot Machine B 12 times, there is far more uncertainty about its true payout rate, and therefore more room for upside, than with Slot Machine A. According to statistical theory, we perform best when we are carefully optimistic: optimistic about under-explored options, in proportion to how little we know about them.
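One standard way to formalize this “careful optimism” is the UCB1 algorithm, which scores each machine by its observed payout rate plus a bonus that grows with uncertainty. This is a minimal sketch using the article’s numbers (real ad platforms may use different bandit variants):

```python
import math

def ucb1_score(successes, pulls, total_pulls):
    """Upper confidence bound: observed payout rate plus an optimism
    bonus that shrinks the more often a machine has been tried."""
    mean = successes / pulls
    bonus = math.sqrt(2 * math.log(total_pulls) / pulls)
    return mean + bonus

# Machine A: 10 payouts in 100 tries; Machine B: 1 payout in 12 tries.
total = 100 + 12
score_a = ucb1_score(10, 100, total)  # well explored, small bonus
score_b = ucb1_score(1, 12, total)    # little data, large bonus

print(f"A: {score_a:.3f}  B: {score_b:.3f}")
```

Despite its lower raw payout rate, Machine B ends up with the higher upper confidence bound, so a UCB1 bandit would try B next; with every additional pull, B’s bonus shrinks and the estimate sharpens.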
In mobile marketing, data scientists are constantly facing “multi-armed bandits”, with each slot machine representing an ad whose lever may or may not be pulled by the targeted audience. However, the ad-pulling mechanism, namely choosing which ad to display to a specific member of the audience, is handled automatically by sophisticated machine learning algorithms that simultaneously learn the improvement potential of each ad while exploiting current knowledge to its fullest.
To make our “bandit”-inspired ad serving more efficient, our decision-making for the current targeted user is informed by prior knowledge of similar, albeit not identical, ad-serving transactions. For example, if we have tested ads for other productivity apps on one publisher, we can make assumptions that reduce the testing needed for a new productivity app. We can make similar inferences based on similar-performing publishers, similar-performing geographies, similar-performing cohorts, similar-performing ad categories, etc.
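One common way to fold in that prior knowledge is Thompson sampling with informative priors: a new ad starts from a Beta distribution seeded with pseudo-counts borrowed from similar past campaigns instead of from scratch. The sketch below is purely illustrative; the arm names, pseudo-counts, and helper functions are hypothetical, not any vendor’s actual system:

```python
import random

def thompson_choose(arms):
    """Thompson sampling: sample a plausible conversion rate for each ad
    from its Beta posterior and serve the ad with the highest draw."""
    draws = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(draws, key=draws.get)

def update(arms, name, converted):
    """Fold the observed outcome of one impression into the posterior."""
    a, b = arms[name]
    arms[name] = (a + 1, b) if converted else (a, b + 1)

# Hypothetical priors: the new app's ad borrows pseudo-counts from
# similar productivity apps on this publisher, rather than starting
# from an uninformative Beta(1, 1).
arms = {
    "new_app_ad": (3, 40),      # prior borrowed from similar campaigns
    "incumbent_ad": (25, 300),  # 25 conversions in ~325 impressions
}

chosen = thompson_choose(arms)
update(arms, chosen, converted=False)
```

The borrowed prior means the new ad is treated as if it already had a few dozen impressions’ worth of evidence, so far less live testing is wasted on conversion rates that experience says are implausible.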
The more complex our ad-serving dilemma gets, the better we are served by the “multi-armed bandit” approach. And as we acquire more and more performance data, we can more readily discriminate between effective “arms” and those delivering poor results, ultimately focusing on what improves ROI and campaign performance.
Using Ad Tech
So, unlike you on your first trip to Las Vegas, data scientists have extensive ad tech data to inform their decisions. While your guess about which slot machine deserved that first quarter was probably random, ad tech data scientists can form assumptions based on experience and past performance. And as they run more “multi-armed bandit” tests, they correct those assumptions and improve campaign performance.
I hope I have provided you with examples of statistical modeling that will help maximize your winnings the next time you visit Las Vegas. But you don’t need to wait until you visit Sin City. Just ask your data scientist (or a mobile ad tech vendor’s data scientist) to walk you through a few thousand of the decisions the technology makes every minute and start betting on which user will respond to which ad. You’ll gain a new appreciation for the sophistication of ad tech modeling in 2017.