
Whether Twitter co-founder and CEO Jack Dorsey intended to or not, he and his company set new standards this year for how social media platforms moderate hate speech and misinformation.
The timing was crucial, given that 2020 has been anything but normal. In this presidential election year, with a global health crisis and social unrest thrown in for good measure, the conversation has played out online, often in rhetoric that is hateful, misleading or outright false, with real-life consequences.
Dorsey—Adweek’s Digital Executive of the Year—has made a point not to stand for it. His tough decisions have already put immense pressure on Twitter’s peers to clean up the misinformation and hate speech on their platforms—even when it comes from the president of the United States.
As Dorsey recently explained, he and his company intend to learn from their past mistakes. “It would be silly for us not to change Twitter,” he told The New York Times in August. To him, the 14-year-old company “should become irrelevant if it doesn’t change, if it doesn’t constantly evolve and if it doesn’t recognize gaps and opportunities to get better.”
Twitter’s decisions in late May to fact-check and limit President Trump’s tweets set in motion an avalanche of activity in the social media industry. When Trump tweeted “when the looting starts, the shooting starts,” a direct threat to shoot protesters in Minneapolis in the aftermath of George Floyd’s death in police custody, Twitter restricted engagement with the tweet, removed it from algorithmic recommendations and placed it behind an interstitial so users would be warned before viewing it.
The platform’s actions at that time stood in stark contrast to those of Facebook, which left the same post up, claiming it didn’t violate its rules about inciting violence.
But other social platforms quickly followed suit. Snap removed Trump from its Discover hub of recommended accounts, Twitch temporarily suspended Trump’s channel over hateful conduct in rebroadcasts of old campaign rallies, and Reddit, a few weeks later, banned hate speech sitewide and permanently booted r/The_Donald, a hotbed of pro-Trump hate speech on the platform.
Twitter’s stance also resonated inside Facebook, where employees rebelled, staged a virtual walkout and in some cases quit in protest. Advertisers spoke up, too: At the urging of civil rights groups like the NAACP and the Anti-Defamation League, more than 1,000 companies paused advertising on Facebook and Instagram to protest how the platforms handle hate speech. The boycott didn’t dent Facebook’s bottom line, but it led the company to release a long-delayed civil rights audit, take tougher action on QAnon and militia groups, and introduce a Twitter-like interstitial for rule-breaking content from world leaders.
These changes have happened as Twitter usage has exploded during the pandemic. On its most recent earnings call, Twitter reported 186 million monetizable daily users, those who can be shown ads (20 million more than the previous quarter). That’s a 34% increase over 2019 and the “highest quarterly year-over-year growth rate we’ve delivered,” Dorsey said at the time. (Ad revenue, however, fell 23%, reflecting the same pandemic-driven losses that have plagued much of the industry.)
Twitter still faces challenges in moderating its feeds. It remains a user-generated platform where racists and trolls can post in seconds, and it suffered the worst security breach in company history this summer. But in the world of content moderation, Dorsey has chosen a path in stark opposition to that of Facebook CEO Mark Zuckerberg, who maintains that Facebook should not be the “arbiter of truth.”