“LeBron James Executed Perfectly Tonight on Defense.” Do you see anything wrong with this headline? Probably not. Then why are articles like these repeatedly flagged by ad-verification platforms as “unsafe content”?
First-Gen Ad Verification Struggles to Enforce Brand Safety
The reason this article would be flagged is the presence of a single word that could be interpreted as negative – in this case, the word “executed.” A human reader could easily tell from context how the word is being used and consider the article “brand-safe.” But many first-gen verification solutions are unable to interpret context. Instead of building advanced NLP (Natural Language Processing) capabilities into their platforms, they rely on “dumb” keyword lists to categorize content. If a potentially negative word (like “executed”) appears in an otherwise safe article, these systems have no way to resolve the ambiguity. They simply flag the article as negative, and a perfectly safe piece of content is blacklisted.
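To make the failure mode concrete, here is a toy sketch (not any vendor’s actual code) of how a context-blind keyword filter behaves – the keyword list and headline are illustrative:

```python
# Toy illustration of first-gen keyword-list filtering: the filter has
# no notion of context, so it flags the LeBron James headline on the
# single word "executed", regardless of how that word is used.
NEGATIVE_KEYWORDS = {"executed", "shooting", "attack", "killed"}

def keyword_flag(text: str) -> bool:
    """Flag content if ANY negative keyword appears, context-blind."""
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return not NEGATIVE_KEYWORDS.isdisjoint(words)

headline = "LeBron James Executed Perfectly Tonight on Defense."
print(keyword_flag(headline))  # True: a perfectly safe headline gets blacklisted
```

The filter returns the same verdict for a basketball recap and a crime report, which is exactly why keyword lists over-flag safe inventory.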
Publishers and advertisers are severely compromised
A common misconception is that the damage is one-sided: publishers are unable to monetize safe content because it’s being flagged as negative and blocked – dealt a bad hand and unjustly losing valuable revenue – while advertisers, erring on the side of caution, are safe and sound.
We respectfully disagree with this mischaracterization of the state of brand safety. There is no “win-lose” situation here, but rather a “lose-lose” situation, where both publisher and advertiser are sustaining heavy damage. Just as first-gen verification struggles to discern safe content from unsafe content, it is unable to protect brands from appearing where they shouldn’t. Leading advertisers find their ads served alongside violent, graphic and offensive content on a daily basis and at huge scale. Ultimately, the situation today is absurd – publishers are mad for having safe inventory flagged as unsafe, and advertisers are mad for having their brand appear alongside horrific content.
Why is this happening? It’s the tech, stupid
While the issue of brand safety has grown in importance, the tech used to solve the problem has lagged behind. First-gen verification is based on simplistic tech: it struggles to analyze and understand language, doesn’t operate autonomously at scale, doesn’t perform in real time, and only looks at a sample of the impressions being served. Brand safety cannot be enforced with this kind of primitive tech.
The industry must adopt autonomous, AI-driven brand safety
Great progress has been made in AI, NLP, and machine learning over the past few years. Next-level tech is being developed and deployed across many verticals, from smart mobility to cybersecurity and e-commerce. The verification industry should be leading the way in developing machine-learning algorithms for fraud prevention and NLP for brand safety. Instead, it clings to dated keyword lists and simplistic filtering methodologies. So, how do we move forward?
We’ve mapped out four criteria that should be met by any player looking to solve brand safety for both advertisers and publishers. The first is AI-driven decision making. No more relying on a dumb list of negative keywords – every piece of content should be analyzed by advanced NLP modules able to infer context and make smart decisions.
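A minimal sketch of what context-aware decisioning means in practice – the word lists and rules below are hypothetical stand-ins for a trained NLP model, not a production classifier:

```python
# Hypothetical context-aware decision logic: instead of blocking on a
# trigger word alone, the system weighs the surrounding context first.
# A real implementation would use a trained NLP model, not word lists.
SPORTS_CONTEXT = {"defense", "game", "court", "season", "coach", "points"}
TRIGGER_WORDS = {"executed", "shooting", "killed"}

def contextual_decision(text: str) -> str:
    words = {w.strip('.,!?"').lower() for w in text.split()}
    if words.isdisjoint(TRIGGER_WORDS):
        return "safe"
    # A trigger word appeared: infer context instead of auto-blocking.
    if words & SPORTS_CONTEXT:
        return "safe"    # trigger word used in a benign (sports) context
    return "unsafe"      # trigger word with no mitigating context

print(contextual_decision("LeBron James Executed Perfectly Tonight on Defense."))  # safe
```

Even this crude two-step rule recovers the LeBron James headline that a bare keyword list would blacklist, while still blocking the same word in a genuinely negative context.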
Second – the entire industry must evolve from measuring damage and optimizing accordingly to automatically preventing the damage. There is no point in measuring brand safety violations after the fact. We need fully autonomous brand safety that blocks out the bad stuff, rather than the manual report-analyze-optimize workflow that is prevalent today.
Third – the entire decision-making process must happen in real time. If your verification vendor is not performing in real time, it isn’t protecting you. It takes serious engineering to streamline the analysis pipeline so the system can make smart, on-the-spot decisions before the ad is actually served. Today’s incumbent solutions are simply too slow for real-time prevention.
Finally – brand safety should be examined across every single impression, rather than just a small sample of the traffic, as is standard practice among first-gen verification vendors today. Blacklisting, cataloging and indexing inventory based on probabilistic assumptions just isn’t good enough and will never provide the real-time, comprehensive coverage needed to protect the advertiser.
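A quick back-of-the-envelope calculation shows why sampling falls short – the 1% sample rate below is an assumed figure for illustration, not a measured industry number:

```python
# Illustrative arithmetic (assumed sample rate, not measured data):
# if a vendor inspects only a fraction of impressions, many unsafe
# placements are never looked at even across repeated serves.
def miss_probability(sample_rate: float, impressions: int) -> float:
    """Probability that none of `impressions` serves is ever sampled."""
    return (1.0 - sample_rate) ** impressions

for n in (1, 10, 100):
    pct = miss_probability(0.01, n)
    print(f"{n:>3} impressions at 1% sampling: {pct:.1%} chance of going unnoticed")
```

At a 1% sample rate, an unsafe placement served 100 times still has better than a one-in-three chance of never being inspected – which is why only per-impression coverage can actually protect the advertiser.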
Over-filtering and under-protecting will only be solved by next-gen tech
Publishers see the damage in their lost revenue, and advertisers feel exposed and compromised. The issue won’t go away by proposing new strategies, adding more guidelines, or forming more brand-safety consortiums. First-gen practices will not stop until the industry realizes that it’s all about the product and the tech, and that any solution aiming to move the needle must be built around the most advanced AI available.