Musk calling shots on X content explains advertiser exodus, former exec says

Recently, Elon Musk has made it clear that he blames advocates speaking out about hate speech for plummeting ad revenue on X, the platform formerly known as Twitter. But today, Reuters published an exclusive interview with a former Twitter ad exec, AJ Brown, who seemed to push back on Musk’s narrative.

The platform’s former head of brand safety and ad quality suggested that X advertisers aren’t just pulling back as a knee-jerk reaction to critics’ claims that the platform has become increasingly toxic under Musk. Brown told Reuters that X shifting its content moderation policy to “limiting reach” of offensive content, rather than removing the content, “made it challenging to convince brands” that Musk’s social media platform “was safe for ads.”

“Helping people wrap their minds around the concept that violating a policy would no longer result in the removal of whatever was violating the policy, was a difficult message to communicate to people,” Brown told Reuters.

This suggests that for some advertisers, the problem isn’t necessarily the amount of hate speech, but rather even the remote possibility that their brand’s ads could be displayed next to offensive content. To brands, it’s seemingly all about what X is doing to ensure brand safety, and for major brands, the answer to that question at various times has been: not enough.

To anyone closely following Musk’s reign at Twitter/X, the insights that Brown shared with Reuters are not news but rather a reminder of what advertisers have been saying since Musk took over the platform.

In November, when 14 of then-Twitter’s top advertisers had left the platform, one of those brands, Mars, released a statement that the company suspended ads because of “significant brand safety and suitability incidents.” An advertiser-led group, the Global Alliance for Responsible Media, warned Musk that “brand safety is non-negotiable for advertisers,” and though X has reported that 99 percent of content views are of “healthy” posts, it’s obvious that limiting reach isn’t a perfect strategy. Just last month, X suspended a pro-Hitler account after two brands paused advertising when Media Matters shared screenshots of their ads being displayed on the violative account’s feed.

After last month’s incident, X announced that the platform was adding more brand safety controls to assuage advertisers’ fears. Now advertisers can block ads from appearing next to posts containing certain keywords. They can also adjust “sensitivity settings” to mark their accounts as “conservative” and avoid ad placements next to “targeted hate speech, explicit sexual content, gratuitous gore, excessive profanity, obscenity, spam, drugs.”

“Your ads will only air next to content that is appropriate for you,” X CEO Linda Yaccarino told CNBC.

For six years, it was Brown’s job to make sure that nobody’s ads appeared next to unsuitable content, Brown told Reuters. He said as soon as Musk came on board, advertisers started voicing concerns, “fearful” of ads appearing next to harmful posts. When Musk changed the content moderation policy to “Freedom of Speech, Not Reach” last April, Brown said that only raised more questions for advertisers who had already paused spending and weren’t convinced that the policy change was a positive development.

Media Matters Vice President Julie Millican told Ars that the nonprofit watchdog group will continue to monitor X to document instances of ads being displayed next to harmful posts.
