Facebook overrides fact-checks when climate science is “opinion”

[Image: Facebook’s election “War Room” on Wednesday, Oct. 17, 2018.]

Facebook has touted its fact-checking process as one of the ways it intends to fight rampant disinformation heading into the 2020 US presidential election. New reports about the way the site handles the fact-checking of climate science stories, though, make clear that fact-checking can only work as well as Facebook allows it to—and that the months from now to November are going to be a slog.

Facebook does not employ fact-checkers directly but rather works with a range of third-party organizations to rate the accuracy of content shared on the platform. These efforts are not applied uniformly, however. While Facebook has invested heavily in stemming the overwhelming tide of false and misleading COVID-19 information, for example, it does little fact-checking of information related to climate change.

The New York Times recently explained the platform’s reasoning for how it handles climate change: Facebook considers opinion content largely exempt from review, and climate change can, as far as Facebook’s rules are concerned, be a matter of opinion.

Rarely reviewed, however, is different from never. “When someone posts content based on false facts—even if it’s an op-ed or editorial—it is still eligible for fact-checking,” Facebook Communications Director Andy Stone told the NYT. “We’re working to make this clearer in our guidelines so our fact checkers can use their judgment to determine whether it is an attempt to mask false information under the guise of opinion.”

So far, that line has been as clear as mud, and Facebook has at least twice overturned the rulings of climate scientists who determined content to be partly or fully false. The first time, a group that partners with Facebook as one of its fact-checkers, Climate Feedback, marked a 2019 Washington Examiner op-ed as false. A climate-change denial organization, the CO2 Coalition, complained to Facebook about the fact-check, and the content warning was then removed.

More recently, an article about climate change published by The Daily Wire, a right-leaning site that generates very high traffic on Facebook, also earned a “partly false” rating from Climate Feedback. The author of the Daily Wire article publicly complained about being “censored,” and Facebook staff reviewed the fact-check. Popular Information obtained internal Facebook documents showing that the Facebook staff agreed with the “partly false” rating. An email thread that alerted high-ranking company executives to the kerfuffle showed that the fact-checking and communications teams apparently wanted to leave it alone, but the policy team said its “stakeholders” thought the fact-check was “biased.” The notice no longer appears on Facebook shares of the Daily Wire story.

Factually intermittent

Facebook and fact-checking have long had a rocky relationship.

The company first established its system of third-party fact-checking in late 2016, in the wake of a half-dozen privacy and misinformation scandals surrounding the lead-up to that year’s presidential election. Late last year, Facebook unveiled a slew of “election integrity” efforts aimed at avoiding the same pitfalls as we head into the 2020 election.

That election is now just over 100 days away, and those efforts appear to have a mixed success rate at best. One of the few categories of political speech where Facebook does promise a bright line is any attempt at voter suppression. That includes advertisements that contain deliberate misinformation about voting, such as including the wrong date (the big day this year is November 3), as well as any ad that suggests “voting is useless or meaningless, or advises people not to vote.” The policy also prohibits ads that “exclude people from political participation on the basis of things like race, ethnicity, or religion.” Ads that say to vote or not to vote for a candidate because of their race or that threaten violence or intimidation fall under that guideline.

Last week, however, ProPublica reported that explicit misinformation about voting remains rampant on the platform. The problem is particularly pronounced with claims about voting by mail, which have increased as the COVID-19 pandemic has made remote voting seem the safer option for tens of millions of Americans.

“Many of these falsehoods appear to violate Facebook’s standards yet have not been taken down or labeled as inaccurate,” ProPublica notes. “Some of them, generalizing from one or two cases, portrayed people of color as the face of voter fraud.”

False claims about voter fraud that come from politicians and candidates for office, including President Donald Trump, have proved particularly thorny for Facebook. While the platform did pull down ads from the Trump campaign that included misleading claims about the census, Facebook has been highly reticent to touch statements from Trump or the Trump campaign, even when they violate Facebook’s published standards.

A group of senators led by Sen. Elizabeth Warren (D-Mass.) last week demanded Facebook explain its inconsistent position on fact-checking.

“Notably, since Facebook issued a blog post on its efforts to combat misinformation on Facebook in April 2017, disinformation campaigns on the platform have continued and expanded, often with state-sponsored support,” the senators wrote. “If Facebook is truly ‘committed to fighting the spread of false news on Facebook and Instagram,’ the company must immediately acknowledge in its fact-checking process that the climate crisis is not a matter of opinion and act to close loopholes that allow climate disinformation to spread on its platform.”

https://arstechnica.com/?p=1692862