Even though Twitter’s terms of service explicitly ban posts glorifying self-harm and media depicting “visible wounds,” independent researchers report that Twitter too often seemingly looks the other way when users post self-harm content. Researchers from the Network Contagion Research Institute (NCRI) estimate there are “certainly” thousands, and possibly “hundreds of thousands,” of users regularly violating these terms without any enforcement by Twitter. The result of Twitter’s alleged inaction: Since October, posts using self-harm hashtags have seen “prolific growth.”
According to reports, Twitter was publicly alerted to issues with self-harm content moderation as early as last October. That’s when 5Rights, a UK charity dedicated to children’s digital rights, reported to a UK regulator that there was a major problem with Twitter’s algorithmic recommendation system. 5Rights’ research found that Twitter’s algorithm “was steering accounts with child-aged avatars searching the words ‘self-harm’ to Twitter users who were sharing photographs and videos of cutting themselves.”
In October, Twitter told the Financial Times that “It is against the Twitter rules to promote, glorify, or encourage suicide and self-harm. Our number-one priority is the safety of the people who use our service. If tweets are in violation of our rules on suicide and self-harm and glorification of violence, we take decisive and appropriate enforcement action.”
NCRI’s report includes a collection of tweets providing graphic evidence that any steps Twitter has taken since last fall have done little to combat the problem. Even users with small followings seem to get broad exposure for posts promoting self-harm, the researchers found. Pointing to just one hashtag, “#shtwt” (which stands for “self-harm Twitter”), researchers discovered that the number of users with the hashtag in their bios has doubled since last fall. Additionally, “monthly mentions of ‘shtwt’ and associated self-harm terms have increased by over 500 percent since Twitter was first publicly alerted to the issue.” Where in October there were fewer than 4,000 monthly tweets using the hashtag, researchers report the trend peaked in July 2022 at “close to 30,000.”
A Twitter spokesperson provided a more recent statement to Ars that largely echoes what the company said last October. The company still considers it “against the Twitter rules to promote, glorify, or encourage self-harm” and says that “the safety of the people who use our service is our priority, and we are committed to building a safer Internet and improving the health of the public conversation.”
What could Twitter do to stop self-harm content from spreading?
NCRI researchers say that there are several reasons why Twitter fails to moderate self-harm content.
For one thing, users are purposefully evasive. They rely on coded language in tweets (like “armgills” or “cat scratches”) that Twitter may not be aware of, and they sometimes claim that graphic images depicting wounds show fake blood. Both tactics seemingly succeed in avoiding content removal.
For another, researchers say that Twitter is more consumed with moderating political content, where hostile users within communities are more likely to report posts directing harm at others. There isn’t that level of ire among members of communities glorifying self-harm, they say. Because “members of the self-harm community are not hostile and celebrate” each other “rather than deride one another,” the researchers suggest, the community doesn’t police itself in the same way. That makes it necessary for Twitter to be more proactive in finding and flagging content, the report says, because self-harm community “members are unlikely to denounce or report one another to Twitter” and can therefore more easily go unnoticed.
To improve self-harm content moderation and address reported problems with its algorithm, Twitter now uses “other tools” the company has “deployed around this type of content,” which include “not allowing known associated terms to appear in the top of search or in search typeahead.” Twitter also works with organizations to redirect people in more than 30 countries searching self-harm terms to “reliable resources using #ThereisHelp prompts.”
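Keyword suppression of this kind is only as strong as the list of terms behind it. As a purely illustrative sketch (the denylist and function names below are hypothetical, not Twitter’s actual systems; only the terms “shtwt” and “armgills” come from the report), a simple denylist-based typeahead filter might look like this, and it shows why coded terms evade moderation until someone adds them to the list:

```python
# Illustrative only -- a minimal denylist-based typeahead filter,
# not Twitter's implementation. Demonstrates why coded terms like
# "armgills" slip through until the denylist is updated.

# Hypothetical denylist of known self-harm search terms.
DENYLIST = {"selfharm", "self-harm", "shtwt"}

def filter_typeahead(query: str, suggestions: list[str]) -> list[str]:
    """Suppress suggestions for flagged queries and drop any
    suggestion containing a known denylisted term."""
    normalized = query.lower().replace(" ", "")
    if any(term in normalized for term in DENYLIST):
        return []  # known term searched: show no suggestions at all
    return [
        s for s in suggestions
        if not any(term in s.lower() for term in DENYLIST)
    ]

print(filter_typeahead("shtwt", ["shtwt pics"]))        # [] -- suppressed
print(filter_typeahead("armgills", ["armgills pics"]))  # passes unfiltered
```

The limitation the researchers describe falls out directly: exact-match filtering is reactive, so each new coded term the community invents works until moderators learn it and expand the list.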
Describing Twitter as an “ongoing, potent accelerator of serious disorders,” the NCRI report warns that “if these networks continue to grow unabated on Twitter, so will the risk of increasingly severe or even fatal injuries.” A Rutgers University psychology professor who helped author the report, Lee Jussim, told The Washington Post that based on the surge his team witnessed within the past year alone, “It smells to me like social media contagion.”
Jussim and 5Rights couldn’t immediately be reached by Ars for additional comment.