It’s been a strange few days for brand safety, from the initial news of Elon Musk’s X lawsuit against the Global Alliance for Responsible Media to the shock that GARM would be shutting down.
What’s surprising is not that it happened. As a podcast advertising leader, I’ve seen firsthand how the organization’s approach to brand safety has served no one: not advertisers, publishers, creators, or the medium itself. But as much as the ad industry needs to find a new path forward, I never expected the collapse to happen so suddenly, or for the reasons stated in the press.
After years as the default authority for brand safety online, GARM had recently been criticized by right-wing groups for supposedly silencing conservative voices. While I’ve been vocal that GARM was an imperfect mechanism for promoting brand safety, the arguments those conservative groups are making don’t add up.
The GARM framework failed consistently in application, with brand safety tools like Barometer and Sounder producing outputs that were destructive for content on all sides of the political spectrum. This led to shows on the left and right being inappropriately graded, costing both sides vital advertising dollars.
The sad truth is that GARM’s shutdown is the right thing, but possibly not for the right reasons. However, it does present an opportunity to build a new approach to brand safety. First, though, we need to truly understand the nature of GARM’s shortcomings, divorced from the political spectacle surrounding it.
Why everyone loses
Despite the protestations of some conservatives, I trust the intentions of the World Federation of Advertisers (WFA) and the individuals who worked on GARM.
It’s not that its standards were biased; they were so impractical that they hurt everyone, whether right, left, or center. Too often, GARM asked the wrong questions, attempting to monitor highly nuanced concepts that are easily misclassified even with advances in AI.
That unreliable data was then treated as fact, without anyone looking under the hood to check whether the outputs held up. For example, take any of the major brand safety measurement tools and compare a podcast’s rating against the transcripts that informed it, and you’ll see how error-prone the GARM system was.