Facebook is giving some power back to its users — but very slowly
First of all, I want to wish everyone a very happy and healthy Data Privacy Day. If you’re looking for a fun way to celebrate, I suggest making a list of all the steps you’ve taken to protect your privacy, and then remember that telecom companies were selling your real-time location to anyone who could afford it until last week.
Next up, two stories about Facebook and power.
The first concerns Facebook’s plan to build a kind of independent Supreme Court for content moderation. It’s an idea that Mark Zuckerberg floated last April in a conversation with Ezra Klein, and that Facebook formally committed to in November.
Today Nick Clegg, the company’s new head of policy and communications, announced Facebook’s next steps toward building what it is now calling an “oversight board” — and published its draft charter:
As we build out the board we want to make sure it is able to render independent judgment, is transparent and respects privacy. After initial consultation and deliberation, we’ve proposed a basic scope and structure that’s outlined in this draft charter. We’ve also identified key decisions that still need to be made, like the number of members, length of terms and how cases are selected.
We’ll look to answer these questions over the next six months in a series of workshops around the world where we will convene experts and organizations who work on a range of issues such as free expression, technology and democracy, procedural fairness and human rights. We’ll host these workshops in Singapore, Delhi, Nairobi, Berlin, New York, Mexico City and many more cities — soliciting feedback on how best to design a board that upholds our principles and brings independent judgment to hard cases.
I like the idea of the board, which promises to devolve power over speech and content moderation to a more diverse group of subject matter experts, who are less tethered to the politics of one country or the financial interests of the platform. As I wrote last summer:
What Facebook is describing with these ideas is something like a system of justice — and there are very few things it is working on that I find more fascinating. For all the reasons laid out by Radiolab, a perfect content moderation regime likely is too much to hope for. But Facebook could build and support institutions that help it balance competing notions of free speech and a safe community. Ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.
The draft charter offers useful new details on how this will all work. The company plans to put 40 people on the board, a number that has struck me today as both too small and too big, depending on which way I look at it. Cases will be decided by smaller panels of board members, who will choose which cases to take up based on their interests. They will publish their opinions, but not their individual votes. And they will be paid.
Facebook will pick the first group, and board members will serve three-year terms. Afterward, each outgoing member will choose their own successor. Current and former Facebook employees are prohibited from joining the board, as are government officials.
At Wired, Issie Lapowsky likes the general idea but worries that the sheer size of Facebook will make the system described in the draft charter unworkable:
No team, no matter the size or scope, could ever adequately consider every viewpoint represented on Facebook. After all, arguably Facebook’s biggest problem when it comes to content moderation decisions is not how it’s making the decisions or who’s making them, but just how many decisions there are to make on a platform of its size.
In seeking to fix one unprecedented problem, Facebook has proposed an unprecedented, and perhaps impossible, solution. No, the decisions Facebook’s supreme court makes won’t dictate who’s allowed to get married or whether schools ought to be integrated. But they will shape the definition of acceptable discourse on the world’s largest social network.
I suspect that we will have much more to argue about as the oversight board takes shape, and considers its first cases. But so much Facebook criticism starts from the observation that the company has unprecedented size and power — and so to see it devolve power back to its own community, even in a limited way, feels worthy of encouragement.
Particularly so, given that the day’s other big story involves a consolidation of power. ProPublica and several other outlets have built tools that allow reporters, with the consent of the users who install them, to collect information about which ads are being targeted at those users. But as Jeremy B. Merrill and Ariana Tobin reported today, the tools stopped working this month — because Facebook blocked their ability to pull in ad targeting data.
It’s part of a long-running game of cat and mouse between Facebook and journalists, in which journalists build tools for scraping data from Facebook, and Facebook tells them to knock it off. Merrill and Tobin argue the data they were collecting made research possible that Facebook’s own tools do not:
The latest move comes a few months after Facebook executives urged ProPublica to shut down its ad transparency project. In August, Facebook ads product management director Rob Leathern acknowledged ProPublica’s project “serves an important purpose.” But he said, “We’re going to start enforcing on the existing terms of service that we have.” He said Facebook would soon “transition” ProPublica away from its tool.
Facebook has launched an archive of American political ads, which the company says is an alternative to ProPublica’s tool. However, Facebook’s ad archive is only available in three countries, fails to disclose important targeting data and doesn’t even include all political ads run in the U.S.
Facebook’s former chief security officer, Alex Stamos, argued on Twitter that Facebook’s move was best understood as a defensive move against ad blockers, rather than an offensive move against journalism. Facebook has pledged to build a separate API for its ad archive that might enable more of the kind of research that ProPublica has been doing, but the API has been slow in coming.
And as Stamos notes, there’s good reason for that. Every API opens a company up to some level of risk — the Cambridge Analytica scandal being the canonical example. And to its credit, as I’ve noted before, Facebook has continued to make good-faith efforts to make some data available to researchers — never as much as researchers would like, but more than you might expect.
The most ambitious such effort is called Social Science One. It’s a partnership between researchers and the private sector that seeks to use platform data to do social science. Its first project with Facebook, announced last April, will examine the relationship between social networks and democracy.
But as Robbie Gonzalez noted in Wired last week, the project has been very slow going:
Make no mistake: Getting SSO off the ground was—and continues to be—a royal pain, what with all the legal paperwork, privacy concerns, and ethical considerations at play. Details of the industry-academic partnership are too complex to relate here (though I’ve written about them previously), but suffice it to say that King and his SSO cofounder, Stanford legal professor Nathan Persily, earlier this month published a 2,300-word update on the status of their initiative, more than half of which is devoted to the ongoing challenges they face in bringing it to fruition. “Complicating matters,” they write, “is the obvious fact that almost every part of our project has never even been attempted before.” […]
But if all goes well, SSO could have a more lasting impact, by setting up a framework for secure, ethical, independent research within the tech giants. There’s no reason future investigations, funded and overseen by SSO or a similar outfit, can’t grapple with big questions on well-being. They should also involve companies other than Facebook. We not only want to know what a vulnerable individual watches on YouTube, we also want to know what’s happening when they go to Reddit, what questions they ask their Alexa or Google Home, or how they feel when they post on Instagram. We need these companies to open their doors, and their datastreams, in a prescribed way that respects every participant in the process.
Over time, I hope that efforts like SSO become easier for legitimate social scientists to undertake, and that similar tools become available to journalists. Data-scraping Chrome extensions have resulted in some excellent journalism. But it’s also true that they exploit vulnerabilities in Facebook’s code that can be, and are, used for bad purposes.
I wish ProPublica and others could continue doing their journalism while a better system is worked out. But that’s the thing about power: those who have it tend to give it up slowly. And when they do, it’s almost always on their terms.
Democracy
Facebook Opens New Fronts to Combat Political Interference
Facebook announced a new effort to fend off foreign interference in the European Union’s parliamentary election campaign this spring, Sam Schechner and Kimberly Chin report:
The company said Monday that it will take steps to guard against the spread of fake news and misinformation on its platform in coming elections, including expanding the reach of a searchable database of political ads.
The new tools—similar to those it applied in the run-up to last year’s midterm election in the U.S.—will be available next month in India, before expanding to the EU in March ahead of the bloc’s hotly contested parliamentary election spread across over two dozen countries and languages in May.
Should real-time ad bidding be illegal under GDPR because it involves giving groups of people sensitive labels (“mental illness,” “sexually transmitted diseases”) and then auctioning them off to the highest bidder? That’s the argument in a new complaint, Natasha Lomas reports. Worth watching — advertising auctions are the financial engine of the entire internet.
It argues the personalized ad industry has “spawned a mass data broadcast mechanism” which gathers “a wide range of information on individuals going well beyond the information required to provide the relevant adverts”; and also that it “provides that information to a host of third parties for a range of uses that go well beyond the purposes which a data subject can understand, or consent or object to”.
“There is no legal justification for such pervasive and invasive profiling and processing of personal data for profit,” the complaint asserts.
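To make that concrete, here’s a minimal, illustrative sketch of the kind of bid request broadcast to every participant in a real-time auction. It loosely follows OpenRTB conventions; the broker name and segment labels are hypothetical stand-ins for the sensitive categories the complaint describes.

```python
# Illustrative only: a simplified, OpenRTB-style bid request of the kind
# broadcast to bidders during a real-time ad auction. The broker name and
# segment labels below are hypothetical examples, not real data.
import json

bid_request = {
    "id": "auction-123",                                   # unique auction ID
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot for sale
    "device": {"ip": "203.0.113.7", "geo": {"country": "GBR"}},
    "user": {
        "id": "buyer-side-user-id",
        "data": [{
            "name": "example-data-broker",  # hypothetical segment provider
            "segment": [
                {"name": "interest/mental-illness"},                 # the sensitive
                {"name": "condition/sexually-transmitted-disease"},  # labels at issue
            ],
        }],
    },
}

# Every bidder receives this payload whether or not it wins the auction --
# the "mass data broadcast mechanism" the complaint objects to.
print(json.dumps(bid_request, indent=2))
```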
Google Memo on Cost Cuts Sparks Heated Debate Inside Company
Add this to the list of things that Google employees are unhappy about: a 2016 document proposing cost cuts, which was shared internally last week. Mark Bergen and Alistair Barr:
The ideas were in a 2016 slide deck drafted by the company’s human resources department from a brainstorming session. The document, portions of which were read to Bloomberg News, was circulated in recent days by employees via Google’s internal communications systems. It detailed proposed changes to employee compensation, benefits and perks.
The document also discussed how the proposals could be best presented to employees to minimize frustration, according to one of the people. That caused the most anger among some staff after the document was circulated, said this person. Google declined to comment.
GDPR makes it easier to get your data, but doesn’t mean you’ll understand it
Jon Porter uses GDPR’s “Right of Access” to see what you actually get when you request your data:
It was easy to download my data in the first place. Both Google and Apple’s data download services let you pick and choose what data you want to download. Facebook doesn’t, but all three are easy to find on their respective websites, and it arrives quickly. Meanwhile, rather than presenting it as an easy option to find on its site, getting a single link with all of your Amazon data relies on you digging through the site’s “Contact Us” page to find the option hidden at the end of the list. Once I requested it, it took the full 30 days to receive a link to download my data (the limit imposed by the regulation).
When it actually came time to look at the data I’d received, however, things got messy. Some files were ambiguously labeled, while others were stored in formats that tested the limits of what constitutes “commonly used.” Actually working out what data I was looking at wasn’t nearly as simple as it should be.
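For what it’s worth, the first step in that triage is simply taking inventory of what formats you were given. Here’s a minimal sketch, assuming a hypothetical export archive named my_data_export.zip:

```python
# Minimal sketch: inventory the file formats inside a downloaded GDPR data
# export, as a first pass at working out what you've actually been given.
# "my_data_export.zip" is a hypothetical archive name.
import zipfile
from collections import Counter
from pathlib import PurePosixPath

with zipfile.ZipFile("my_data_export.zip") as archive:
    extensions = Counter(
        PurePosixPath(name).suffix or "(no extension)"
        for name in archive.namelist()
        if not name.endswith("/")  # skip directory entries
    )

# Formats like .json and .csv are plainly "commonly used"; anything more
# obscure is exactly the problem Porter ran into.
for ext, count in extensions.most_common():
    print(f"{ext}: {count} file(s)")
```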
How Volunteers for India’s Ruling Party Are Using WhatsApp to Fuel Fake News Ahead of Elections
Billy Perrigo reports that a move to limit the number of times a WhatsApp message can be forwarded has not effectively stopped the spread of misinformation in India. Indian political parties, led by Prime Minister Narendra Modi’s ruling Bharatiya Janata Party (BJP), are creating “hundreds of thousands” of WhatsApp group chats to spread political messages:
The strategy reflects a fundamental change in Indian society: at the time of the last national polls in 2014, just 21% of Indians owned a smartphone; by 2019, that figure is thought to have nearly doubled to 39%. And for most of them, WhatsApp is the social media app of choice — by one count, more than 90% of smartphone users have it installed. In recognition of that shift, the BJP’s social media chief declared 2019 the year of India’s first “WhatsApp elections.”
But according to researchers, as well as screenshots of group chats from as recently as January seen by TIME, these WhatsApp group chats frequently contain and disseminate false information and hateful rhetoric, much of which comes from forwarded messages. Experts say the Hindu nationalist BJP is fueling this trend, although opposition parties are using the same tactics.
Elsewhere
How Facebook Trains Content Moderators to Put Out ‘PR Fires’ During Elections
Joseph Cox explores how Facebook instructs moderators to flag content that could generate a public-relations crisis:
Internal Facebook documents obtained by Motherboard show that beyond protecting democracy, there’s a second, clearly stated reason that Facebook is interested in hardening its platform: protecting its public image. Facebook specifically instructs its content moderators to look out for posts that could cause “PR fires” around “hi-risk events” in the lead-up to elections.
The internal documents Motherboard obtained from a source talk specifically about three separate elections held in 2018. A second source, when presented with sections of one of the documents, said that it was indicative of others that the company distributes before elections around the world.
Facebook Watch Struggles to Deliver Hits or Advertisers
Sarah Frier says Facebook’s video tab is off to a slow start. Which seems crazy given the caliber of content you can find there.
While researcher Emarketer estimates that Facebook as a whole will take in nearly double YouTube’s $4.3 billion in video ad sales this year, it expects Watch to account for a single-digit percentage of that figure. “I don’t think it’s yet become a must-buy for brands,” says Abbey Klaassen, chief marketing officer at New York marketer 360i. “They are in a stiff competition for this kind of advertising and inventory.” Last summer, a year after Watch went live in the U.S., half of consumers hadn’t heard of it and three-quarters hadn’t used it, according to researcher Diffusion Group.
Thanks to addressable TV, budgets are starting to move away from Facebook
Related to the above story: Seb Joseph says that one of Facebook’s biggest initiatives — to siphon away advertising revenue from television — has been thwarted by the rise of “addressable” televisions. Which is to say, televisions that collect personal data about you and then serve you personalized ads:
“TV advertisers are moving away from Facebook in relatively big numbers,” said a senior agency executive at a holding group. “They’re putting that money back into TV because it’s more regulated and is starting to establish a proposition around targeted ads. TV viewing as a whole isn’t down, but the way we watch it live is and that’s something broadcasters are still figuring out how to monetize.”
Duracell spent some of 2018 investigating where to spend Facebook budgets across Europe after it discovered that a large portion of the video ads it bought on the social network in the U.K. weren’t being viewed. “People aren’t bothered about video on Facebook,” said a marketer at the CPG business, who was not authorized to speak to Digiday. Rather than try to focus on more expensive, but viewable impressions, Duracell decided to divert the money it had spent back to TV, a large portion of which went toward on-demand content.
Instagram denies viral claim that it hides most posts from users
Earlier this month I linked to this good John Herrman piece about how Facebook’s secrecy fuels paranoia about how the platform works. Here Shannon Liao unpacks the latest conspiracy theory to roil Instagram:
Over the past year, people have been posting claims that only 7 percent of Instagram followers see a user’s posts. The viral claim has made it as far as Pinterest and Facebook, where users, often advocating for small businesses on Instagram, ask people to like and comment “Yes” in order to improve a user’s “ranking” and gain more views.
Although the 7 percent post has been around since at least January 2018, it recently gained more traction, accumulating thousands of likes. Instagram has now come forward to debunk the claim in a Twitter thread, commenting, “We’ve noticed an uptick in posts about Instagram limiting the reach of your photos to 7% of your followers, and would love to clear this up.” It explains that the Instagram feed shows the posts in the order of the accounts that you tend to interact with the most. That means that you should see all of the posts from accounts you’re following eventually, assuming you have the patience to scroll all the way down.
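The distinction Instagram is drawing is between reordering and filtering. Here’s a toy model, emphatically not Instagram’s actual code, showing why ranking by predicted interest changes the order of your feed but not the number of posts in it:

```python
# Toy model only -- not Instagram's actual ranking system. It illustrates
# the company's debunking: posts are *reordered* by predicted interest,
# not *filtered* down to 7 percent of your followers' feeds.

posts = [
    {"id": 1, "author": "close_friend"},
    {"id": 2, "author": "brand"},
    {"id": 3, "author": "acquaintance"},
]

# Hypothetical engagement scores: how often you interact with each account.
engagement = {"close_friend": 0.9, "acquaintance": 0.4, "brand": 0.1}

# Ranking reorders the feed; every post is still present.
feed = sorted(posts, key=lambda p: engagement[p["author"]], reverse=True)

assert len(feed) == len(posts)  # nothing is hidden, just resequenced
print([p["author"] for p in feed])  # ['close_friend', 'acquaintance', 'brand']
```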
Exclusive: Snapchat weighs what was once unthinkable
This is one of those stories where the headline overstates the significance by 100 percent. But: Angela Moon and Sheila Dang report that Snap is considering making some public Our Story posts permanent, so as to be more useful to its news partners.
Snap announced the partnerships last year and will sign four more deals in the near future, said the person familiar with Snap’s plans. Initially, public stories would disappear after 30 days but now remain viewable for 90 days, according to Snap’s support website.
Some partners have said that the disappearing and anonymous nature of public stories makes them difficult to work with, the sources said. Some news organizations will not embed Snapchat stories into articles because the content eventually disappears, while others will not use them because they are unable to verify anonymous users’ Snapchat videos.
Tinder settles age discrimination lawsuit with $11.5 million worth of Super Likes
Tinder had the bright idea of charging older people more than younger people for its subscription service, and after a class-action lawsuit argued that this was pretty much definitionally age discrimination, the company agreed to pay out $11.5 million … in Super Likes. Dami Lee reports:
Tinder was criticized when the service launched for its age tiers, which charged a $9.99 monthly fee for users under 29 and $19.99 for users 30 and up. At the time, Tinder defended the pricing model and compared the tiers to Spotify’s discounted rates for students in a statement to NPR. A Tinder spokesperson commented, “During our testing we’ve learned, not surprisingly, that younger users are just as excited about Tinder Plus, but are more budget constrained and need a lower price to pull the trigger.”
Launches
YouTube says it will recommend fewer videos about conspiracy theories
I neglected on Friday to include my own story — !!! — about a meaningful update to YouTube’s algorithm. There is a lot more to say about this, but I think the best move is to let this test unfold and see whether the results match the company’s rhetoric. (It is also still basically unclear what YouTube considers “borderline content,” save for three examples named in a blog post.)
YouTube said on Friday that it would promote fewer videos containing misinformation and conspiracy theories, a potentially significant step to reducing the service’s potential to drive viewers toward extremist content. In a test limited to the United States, the service said it would stop recommending what it calls “borderline content” — videos that come close to violating its community guidelines but stop just short.
While YouTube said the change would affect less than 1 percent of videos that are available on YouTube, the sheer volume of content available on the service suggests the effect could be significant.
My former colleague Tim Carmody just started a promising new newsletter that may be worth your time. Think The Interface, but instead of Facebook as the core subject, it’s Amazon. I’m in.
Takes
“The less we know, the more we’ll engage,” argues Colin Horgan, in a piece about the Covington Catholic conflict and what it tells us about how news spreads on social networks:
Information networks like Twitter and Facebook were created to feed us endless supplies of information. They sold us the promise of grasping, at any moment, all the world’s information from all possible perspectives. We accepted these platforms happily, assuming we would know what to do with all the thoughts and ideas they provide.
We were wrong. Rather than bringing us a clearer picture of our world, platforms have left us utterly and completely bewildered. Yet, instead of questioning social media’s original promise, we carry on, mistakenly believing the solution to this problem will come with even more information. We are caught in a contextual death spiral — a bottomless gyre in which we tumble forever disoriented, helplessly drinking water to save ourselves from drowning.
Ben Thompson makes an unpopular but necessary point about the relationship between Facebook, Google, and the media business:
While I know a lot of journalists disagree, I don’t think Facebook or Google did anything untoward: what happened to publishers was that the Internet made their business models — both print advertising and digital advertising — fundamentally unviable. That Facebook and Google picked up the resultant revenue was effect, not cause. To that end, to the extent there is concern about how dominant these companies are, even the most extreme remedies (like breakups) would not change the fact that publishers face infinite competition and have uncompetitive advertising offerings.
Mark Zuckerberg’s WSJ op-ed was a message to would-be regulators: Hands off our ad business
Kurt Wagner argues that Zuckerberg’s recent op-ed in the Journal was intended to dissuade regulators from involving themselves too much in Facebook’s affairs:
There are a lot of people who want to regulate Facebook. And regulators, both in the United States and in Europe, probably read the Journal. That likely includes politicians who want to pass laws that would limit Facebook’s ad business, or regulators like those who work at the FTC.
Zuckerberg is bringing his argument for why Facebook doesn’t need to be babysat — “we give people complete control” — to a news outlet that should reach the people who might disagree.
And finally …
Often people will ask me, Casey, who was the first social-media influencer? And so I want to thank Pope Francis for settling the matter once and for all.
With her “yes”, Mary became the most influential woman in history. Without social networks, she became the first “influencer”: the “influencer” of God. #Panama2019
— Pope Francis (@Pontifex) January 27, 2019
Certainly I will never forget when Mary, having revealed the Christ child to the world, followed up with “Smash that like button fam and don’t forget to subscribe!”
Talk to me
Send me tips, comments, questions, and your nominations for the Facebook oversight board: casey@theverge.com.