Facebook confirms its “standards” don’t apply to politicians

President Donald Trump discusses Twitter, Facebook, and Google during a Social Media Summit at the White House in July 2019.

Facebook this week finally put into writing what users—especially politically powerful users—have known for years: its community “standards” do not, in fact, apply across the whole community. Speech from politicians is officially exempt from the platform’s fact checking and decency standards, the company has clarified, with a few exceptions.

Facebook communications VP Nick Clegg, himself a former member of the UK Parliament, outlined the policy in a speech and company blog post Tuesday.

Facebook has had a “newsworthiness exemption” to its content guidelines since 2016. That policy was formalized in late October of that year, amid a contentious and chaotic US political season and three weeks before the presidential election that would land Donald Trump in the White House.

Facebook at the time was uncertain how to handle posts from the Trump campaign, The Wall Street Journal reported. Sources told the paper that Facebook employees were sharply divided over the candidate’s rhetoric about Muslim immigrants and his stated desire for a Muslim travel ban, which several felt were in violation of the service’s hate speech standards. Eventually, the sources said, CEO Mark Zuckerberg weighed in directly and said it would be inappropriate to intervene. Months later, Facebook finally issued its policy.

“We’re going to begin allowing more items that people find newsworthy, significant, or important to the public interest—even if they might otherwise violate our standards,” Facebook wrote at the time.

Clegg’s update says that Facebook by default “will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” Nor will it be subject to fact-checking, as the company does not believe it is appropriate for it to “referee political debates” or prevent a politician’s speech from both reaching its intended audience and “being subject to public debate and scrutiny.”

Newsworthiness, he added, will be determined by weighing the “public interest value of the piece of speech” against the risk of harm. As for what scale the company uses to quantify this delicate balance, Clegg added:

When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.

The exception to all of this is advertising, Clegg added. Standards are different for content for which the company receives payment, so if someone—even a politician or political candidate—posts ads to Facebook, those ads in theory must still meet both the community standards and Facebook’s advertising policies.

Real consequences

Facebook does face a daunting and perhaps insurmountable task in trying to uphold global community standards across 2.4 billion users spanning 24 time zones and more than 100 languages. But its past attempts to walk that thin and dotted line, particularly when it comes to hate speech, have had significant and occasionally fatal consequences.

The company’s guidelines say that hate speech is not allowed on the platform “because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence,” which experts generally agree is true.

Internationally, rampant, unchecked hate speech on Facebook has had devastating consequences. Facebook has been heavily implicated in the systematic persecution of hundreds of thousands of Rohingya Muslims in Myanmar, who are still under threat of genocide, according to the UN.

The company issued a report in November 2018 acknowledging its role in the crisis. “Prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence,” the company admitted. “We agree that we can and should do more.”

The line from hate speech to dire consequence, however, may not always be as clear as the company’s review of content shared in Myanmar found it to be, and “content that has the potential to incite violence,” when shared by a prominent individual, is a big bucket.

In the US, President Trump, for example, routinely uses language that explicitly dehumanizes immigrants and speaks of an “invasion” at the southern US border. In August, a gunman who killed 22 people and wounded dozens more at a Walmart in El Paso, Texas, shared a so-called manifesto online describing his motivations for killing Mexicans, immigrants, and Latinx people. That document strongly echoed rhetoric used by Trump on multiple occasions, both online and in person.

Human rights experts at the UN recently condemned the kind of hate speech used by the president and other right-wing leaders worldwide, writing that “leaders, senior government officials, politicians and other prominent figures [are] spreading fear among the public against migrants or those seen as ‘the others’ for their own political gain,” and that such rhetoric has led to increased violence.

Source: https://arstechnica.com/?p=1574341