This weekend, widespread protests erupted in China in what amounted to “the biggest show of opposition to the ruling Communist Party in decades,” AP News reported. Many protesters attempted to document events live to spread awareness and inspire solidarity across Twitter. The demonstrations were so powerful that Chinese authorities actually seemed to cave, acceding to some of the protesters’ demands by easing the severe lockdown restrictions that sparked the protests.
This could have been a moment that showed how Twitter under Elon Musk is still a relevant breaking-news source, still a place where free speech demonstrations reach the masses, and thus, still the only place to track escalating protests like these. Instead, The Washington Post reported that a flood of “useless tweets” effectively buried live footage from protests. This blocked users from easily following protest news, while Twitter seemingly did nothing to stop what researchers described as an apparent Chinese influence operation.
For hours, these tweets dropped the names of Chinese cities where protests occurred into posts that were mostly advertising pornography and adult escort services. And it worked, preventing users searching those city names in Chinese from easily seeing updates on the protests. Researchers told The Post that the tweets were posted from a range of Chinese-language accounts that hadn’t been used for months or even years. The tweets began appearing early Sunday, shortly after protesters started calling for Communist Party leaders to resign.
Researchers took note of the suspected Chinese influence operation early on Sunday, and some reached out to Twitter directly. Eventually, an outside researcher managed to reach a current Twitter employee, who confirmed that Twitter was working to resolve the issue. However, experts told The Post that Twitter’s fix only seemed to reduce the problem, not resolve it completely. Stanford Internet Observatory Director Alex Stamos told The Post that his team has continued investigating the operation’s reach and impact.
Stamos did not immediately respond to Ars’ request for comment. Twitter reportedly has no communications team.
A former Twitter employee told The Post that what Stamos’ team observed is a common tactic used by authoritarian regimes to reduce access to news. Normally, Twitter’s anti-propaganda team would have manually taken down the accounts, that former employee said. But like many other teams hit by Twitter layoffs, firings, and resignations, that team has been heavily reduced.
“All the China influence operations and analysts at Twitter all resigned,” the former Twitter employee told The Post.
Scrutiny increases on automated content removal
In reducing the content-moderation teams, Musk seems to be relying mostly on automated content removal to catch violations that staff members previously monitored manually. The issue stretches beyond China. Also this weekend, French regulators said they have become dubious about Twitter’s ability to stop the spread of misinformation, and the New Zealand government had to intervene and contact Twitter directly when Twitter failed to detect banned footage of the Christchurch, New Zealand, terror attack.
A spokesperson for New Zealand Prime Minister Jacinda Ardern told The Guardian that “Twitter’s automated reporting function didn’t pick up the content as harmful.” Apparently, the entire Twitter team that New Zealand had expected to partner with in blocking such extremism-related content was laid off.
Now Ardern’s office says “only time will tell” if Twitter is truly committed to removing harmful content, and other governments globally seem to agree. Just today, French communications regulatory agency Arcom told Reuters that “Twitter has shown a lack of transparency in the fight against misinformation,” releasing a report that specifically calls out how “imprecise” the company was about how its automated tools combat misinformation.
According to European Union data reviewed by AP, Twitter had already become more sluggish with removing hate speech and misinformation throughout the past year, even before Musk took over. But it’s Musk who will have to answer to governments scrambling to ensure that Twitter’s content moderation will actually work to prevent extremism and disinformation campaigns from spreading online and causing real harm.
By mid-2023, Musk will feel greater pressure to respond to the concerns of EU countries, where stricter rules protecting online safety will soon take effect. If he doesn’t, he risks fines as high as 6 percent of Twitter’s global revenue, AP reported.
For now, though, Musk is basically doing the opposite of what online safety experts want, according to AP. As Musk grants “amnesty” to suspended Twitter accounts, experts told AP that they predict misinformation and hate speech will only rise on the platform.
These experts included members of Twitter’s Trust and Safety Council, who confirmed that the group has not met since Musk took over and seems unsure whether a meeting scheduled for mid-December will occur. So far, when deciding whether to bring back suspended accounts, Musk seems to prefer Twitter polls over expert opinions. One council member, University of Virginia cyber civil rights expert Danielle Citron, told AP that “the whole point of the permanent suspension is because these people were so bad they were bad for the business.”
Ars couldn’t immediately reach Citron for comment, but she told AP that—like rumors that Twitter could break at any moment—Musk granting amnesty to suspended accounts is yet another “disaster waiting to happen.”