More than 17,000 YouTube channels removed since new hateful content policy implemented

YouTube’s teams have removed more than 100,000 videos and 17,000 channels since the company implemented changes to its hateful content policies in June.

Those numbers are approximately five times as many as in the previous quarter, according to a new blog post from YouTube about the company’s attempts to tackle a growing volume of hateful and dangerous videos on the platform. The figure also reflects a doubling in the removal of comments found to be hateful, with more than 500 million taken down. Some of these channels, videos, and comments are old and were only terminated following the policy change, according to the blog post, which could account for the spike in removal numbers.

YouTube relies mostly on machine learning tools to help catch hateful videos before they’re widely available online. According to the blog post, approximately “80 percent of those auto-flagged videos were removed before they received a single view in the second quarter of 2019.”

“We’ve been removing harmful content since YouTube was started, but our investment in this work has accelerated in recent years,” the blog post reads. “Over the last 18 months we’ve reduced views on videos that are later removed for violating our policies by 80 percent, and we’re continuously working to reduce this number further.”

Still, it’s difficult to gauge what those numbers amount to without proper context of scale. More than five hundred hours of video are uploaded to YouTube every single minute; at that rate, users add more than 700,000 hours of new video every day. There were more than 23 million YouTube channels in 2018, according to analytics firm Social Blade, and YouTube has nearly two billion monthly logged-in users. Gross numbers and a few percentage points without that additional context don’t paint a clear picture of how much of the problem YouTube is able to tackle right now.

The blog post does demonstrate how long YouTube’s policy and product teams have been trying to fight hateful activity. A timeline produced by YouTube lays out the various efforts YouTube’s teams have made to combat harmful, hateful, and disturbing content, going back to November 2016, when journalists uncovered disturbing videos targeting children.

The period spanning late 2016 and early 2017 is often viewed as one of YouTube’s earliest tumultuous stretches in its struggle to contain, fight, and prevent dangerous videos. Early 2017 also saw reports from journalists pointing out that YouTube had become a hotbed for terrorist content. Between then and now, YouTube has faced global public scrutiny for allowing harmful videos to remain on the platform.

Part of YouTube’s approach to tackling this growing issue is an Intelligence Desk that keeps an eye on what people are seeing. The desk launched in January 2018, the same month that vlogger Logan Paul uploaded a video of a dead man’s body that was seen millions of times before it was removed. The team “monitors the news, social media and user reports in order to detect new trends surrounding inappropriate content.”

“We’re determined to continue reducing exposure to videos that violate our policies,” the blog post reads.

That also includes updating policies as needed. YouTube’s policy team is currently working on updating its harassment policy, most notably to address creator-on-creator harassment. A popular topic among YouTubers, creator-on-creator harassment became a much bigger conversation in June, when Vox personality Carlos Maza detailed conservative pundit Steven Crowder’s use of homophobic language when talking about Maza. YouTube revoked Crowder’s ad privileges, but his channel remained up. CEO Susan Wojcicki addressed the controversy at Recode’s Code Conference, reiterating that although the company didn’t agree with Crowder’s language, the videos were deemed acceptable under its policies.

(Disclosure: Vox and Recode are publications of Vox Media, which also owns The Verge.)

Removing harmful videos is just one of the steps YouTube takes to fight problematic content on its platform. The company is set to release more information in the coming months about the three others: reducing the spread of borderline content, raising up more authoritative voices, and rewarding positive creators through ad privileges.

https://www.theverge.com/2019/9/3/20845071/youtube-hateful-content-policies-channels-comments-videos-susan-wojcicki