Meta disrupted China-based propaganda machine before it reached many Americans

China’s ability to influence American politics by manipulating social media platforms has been a topic of much scrutiny ahead of the midterm elections, and this week has marked some progress toward mitigating risks on some of the most popular US platforms.

US President Joe Biden is currently working on a deal with China-based TikTok—often regarded as a significant threat to US national security—with the goal of blocking potential propaganda or misinformation campaigns. And today, Meta, the owner of Facebook and Instagram, shared a report detailing the steps it took to remove the first “Chinese-origin influence operation” it has identified attempting “to target US domestic politics ahead of the 2022 midterms.”

In the press release, Meta Global Threat Intelligence Lead Ben Nimmo joined Meta Director of Threat Disruption David Agranovich in describing the operation as initiated by a “small network.” They said that between fall 2021 and September 2022, there were four “largely separate and short-lived” efforts launched by clusters of “around half a dozen” China-based accounts, which targeted both US-based conservatives and liberals using platforms like Facebook, Instagram, and Twitter.

In total, Meta removed “81 Facebook accounts, eight pages, one group, and two accounts on Instagram.” Meta estimated approximately 250 accounts joined the group, 20 accounts followed one or more Pages, and fewer than 10 accounts followed one or both Instagram accounts.

“This was the first Chinese network we disrupted that focused on US domestic politics ahead of the midterm elections,” the press release said. Previously, Meta had only disrupted Chinese networks that were working to influence opinions on US politics held by audiences outside the US.

When Meta monitors this type of activity, which it calls “coordinated inauthentic behavior,” it says in its report that it’s looking for fake accounts intentionally manipulating public debate. The bad actors do this by coordinating actions of multiple fake accounts “to mislead others about who they are and what they are doing.”

Meta policy dictates that this type of moderation is about monitoring account behavior, not the content of posts. Examples in the report include fake accounts posting memes targeting the left by alleging that the National Rifle Association of America paid off Senator Marco Rubio (R-Fla.) and the right by depicting a tentacled Biden gripping the world bearing nukes and machine guns. What gets an account removed is not, Meta said, “what they post or whether they’re foreign or domestic,” but whether the network would collapse without the fake accounts propping it up.

A Meta spokesperson told Ars that it focuses on “violating deceptive behavior, not content,” to take down covert influence operations because “these networks typically post content that isn’t provably false, but they rather aim to mislead people about who’s behind it and what they are doing.”

In the press release, Nimmo and Agranovich summed up the extent of the Chinese network’s reach and how well Meta worked to detect its deceptive behaviors, writing: “Few people engaged with it, and some of those who did called it out as fake. Our automated systems took down a number of accounts and Facebook Pages for various Community Standards violations, including impersonation and inauthenticity.”

Other threats detected

In the same report, Meta described a takedown of a much larger instance of “coordinated inauthentic behavior” originating from Russia.

Described as the largest Russian network of its kind that Meta has “disrupted since the war in Ukraine began,” this second operation targeted users based in “primarily Germany, France, Italy, Ukraine, and the UK.” Its online presence spanned 1,633 Facebook accounts, 703 Facebook pages, one Facebook group, and 29 accounts on Instagram. The reach was limited to 4,000 accounts following at least one page, fewer than 10 accounts joining the group, and 1,500 accounts following at least one Instagram account. The operation also invested $105,000 in Facebook and Instagram ads, “paid primarily in US dollars and euros.”

The Russian network began operating in May, Meta reported, by launching more than 60 websites “carefully impersonating legitimate websites of news organizations in Europe,” such as Spiegel and The Guardian. The operation was concerning because of its attention to detail: it mimicked authoritative news sites and translated articles into multiple languages. The operation relied on Facebook, Instagram, petitions on Change.org, Twitter, YouTube, and other social networks in its attempt to spread its fraudulent information.

German investigative journalists tipped off Meta to the problem, and when Meta tried to block domains, the network “attempted to set up new websites, suggesting persistence and continuous investment in this activity across the Internet.”

“This is the largest and most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine,” Nimmo and Agranovich wrote in the press release. “It presented an unusual combination of sophistication and brute force. The spoofed websites and the use of many languages demanded both technical and linguistic investment.”

But while Meta considered the Russian network “unusual,” a report from the Stanford Internet Observatory (SIO) Cyber Policy Center released last month described the majority of these tactics as common.

For its report, SIO evaluated Meta and Twitter data covering five years of pro-Western covert influence operations, which the platforms had already jointly removed.

To complete its analysis, SIO worked with the social media analytics firm Graphika to identify “an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that used deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia,” as well as narratives that heavily criticized Russia, China, and Iran.

Twitter’s dataset “covered 299,566 tweets by 146 accounts between March 2012 and February 2022,” but Meta’s was limited to “39 Facebook profiles, 16 pages, two groups, and 26 Instagram accounts active from 2017 to July 2022.” Combining the datasets left SIO with five years’ worth of cross-platform activities from the influence operations to analyze.

Hoping to better understand “how different actors use inauthentic practices to conduct online influence operations,” SIO found that these operations have a limited range of tactics, employing the same, mostly unsuccessful, strategies over and over.

“The assets identified by Twitter and Meta created fake personas with GAN-generated faces, posed as independent media outlets, leveraged memes and short-form videos, attempted to start hashtag campaigns, and launched online petitions: all tactics observed in past operations by other actors,” SIO’s report said.

In Meta’s most recent report, tactics included relying on “crude ads,” generating fake profiles, impersonating journalists, leveraging memes, launching online petitions, and posting comments on influential accounts for maximum visibility.


Facebook users sue Meta for bypassing beefy Apple security to spy on millions

After Apple updated its privacy rules in 2021 to let iOS users easily opt out of all tracking by third-party apps, so many people opted out that the Electronic Frontier Foundation reported that Meta lost $10 billion in revenue over the next year.

Meta’s business model depends on selling user data to advertisers, and it seems that the owner of Facebook and Instagram sought new paths to continue widely gathering data and to recover the suddenly lost revenue. Last month, a privacy researcher and former Google engineer, Felix Krause, alleged that one way Meta sought to recover its losses was by directing any link a user clicks in the app to open in-browser, where Krause reported that Meta was able to inject code, alter the external websites, and track “anything you do on any website,” including passwords, without user consent.

Now, within the past week, three Facebook and iOS users have filed two class action lawsuits [1] [2] against Meta on behalf of all impacted iOS users, pointing directly to Krause’s research. The suits accuse Meta of concealing privacy risks, circumventing iOS users’ privacy choices, and intercepting, monitoring, and recording all activity on third-party websites viewed in Facebook or Instagram’s browser, including form entries and screenshots. That access, they allege, grants Meta a secretive pipeline through its in-app browser to “personally identifiable information, private health details, text entries, and other sensitive confidential facts,” seemingly without users even knowing the data collection is happening.

The most recent complaint was filed yesterday by California-based Gabriele Willis and Louisiana-based Kerreisha Davis. A lawyer from their legal team at Girard Sharp LLP, Adam Polk, told Ars that it was an important case to stop Meta from getting away with concealing ongoing privacy invasions. In the complaint, the legal team pointed to prior Meta misdeeds in gathering user information without consent, noting for the court that a Federal Trade Commission investigation resulted in a $5 billion fine for Meta.

“Merely using an app doesn’t give the app company license to look over your shoulder when you click on a link,” Polk told Ars. “This litigation seeks to hold Meta accountable for secretly monitoring people’s browsing activity through its in-app tracking even when they haven’t allowed Meta to do that.”

Meta did not immediately respond to Ars’ request for comment. Krause told Ars he prefers not to comment.

Meta allegedly secretly tracks data

According to the complaints, which rely on the same facts, Krause’s research “revealed that Meta has been injecting code into third-party websites, a practice that allows Meta to track users and intercept data that would otherwise be unavailable to it.”

To investigate the potential privacy issue, Krause built a website called inappbrowser.com, where users could “detect whether a particular in-app browser is injecting code into third-party websites.” He compared an app like Telegram, which doesn’t inject JavaScript code into third-party websites to track user data in its in-app browser, with the Facebook app by tracking what happens in the HTML file when a user clicks a link.

In the case of tests run on Facebook and Instagram apps, Krause reported that the HTML file clearly showed that “Meta uses JavaScript to alter websites and override its users’ default privacy settings by directing users to Facebook’s in-app browser instead of their pre-programmed default web browser.”

The complaints note that this tactic of injecting code, seemingly employed by Meta to “eavesdrop” on users, is known as a JavaScript injection attack. The lawsuit defines that as instances where “a threat actor injects malicious code directly into the client-side JavaScript. This allows the threat actor to manipulate the website or web application and collect sensitive data, such as personally identifiable information (PII) or payment information.”
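As a rough illustration of the mechanism the lawsuit describes (not Meta’s actual code; every name below is invented for the sketch), an injection of this kind amounts to rewriting a fetched page’s HTML to carry a tracking script before the page is rendered:

```javascript
// Hypothetical sketch: an in-app browser fetches a third-party page
// and inserts its own <script> tag before handing the HTML to the
// renderer. injectTracker and reportToApp are invented names.

function injectTracker(html, trackerJs) {
  // Insert the tracking script just before </head> so it runs
  // before the page's own scripts do.
  return html.replace("</head>", `<script>${trackerJs}</script></head>`);
}

// A third-party page as fetched over the network.
const page =
  "<html><head><title>Example News</title></head>" +
  "<body><form><input name='email'></form></body></html>";

// Hypothetical tracker: watches taps and form input and reports them
// back to the host app via an invented reportToApp callback.
const trackerJs =
  "document.addEventListener('click', e => reportToApp('tap', e.target.tagName));" +
  "document.addEventListener('input', e => reportToApp('input', e.target.value));";

const rendered = injectTracker(page, trackerJs);
console.log(rendered.includes("<script>")); // the page now carries the injected script
```

Detection tools like Krause’s inappbrowser.com approach this from the other side: the loaded page inspects its own DOM at runtime for script elements it never shipped.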

“Meta now is using this coding tool to gain an advantage over its competitors and, in relation to iOS users, preserve its ability to intercept and track their communications,” the complaint alleges.

According to the complaints, “Meta acknowledged that it tracks Facebook users’ in-app browsing activity” when Krause reported the issue to its bug bounty program. The complaints say that Meta also confirmed at that time that it uses data collected from in-app browsing for targeted advertising.


Italian privacy authority to Facebook: What initiatives for the elections in Italy?

The Italian data protection authority (Garante per la privacy) has sent Facebook Italia (Meta) an urgent request for clarification regarding the activities the social network has undertaken around the upcoming elections to renew the Italian Parliament.

Meta, the company that operates Facebook, has publicly announced that it has launched an information campaign by publishing voting reminders. The campaign, expressly aimed at adult Italian users, is reportedly intended to counter interference and remove content that discourages voting. Meta’s initiatives are said to include collaborating with independent fact-checking organizations and using a virtual operations center to identify potential threats in real time.

Meta’s reply

The social network’s reply was not long in coming: “The election tools launched in Italy were expressly designed to respect users’ privacy and comply with the GDPR. We are working with the privacy authority to explain how we work to help protect the integrity of the Italian elections and to help people access reliable election information from the Ministry of the Interior,” a Meta spokesperson said.

The Garante, also in light of the earlier sanction imposed on Facebook over the “Cambridge Analytica” case and the “Candidati” project launched for the 2018 elections, notes that particular care must be taken in processing data that can reveal the political opinions of data subjects and in respecting the free expression of ideas.

Facebook will have to provide precise information about the initiative; about the nature and methods of the data processing, including any agreements related to sending the reminders and publishing the informational “stickers” (also published on Instagram, part of the Meta group); and about the measures adopted to ensure, as announced, that the initiative reaches only adults.


Facebook reverses permanent ban on Holocaust movie after outcry

Writer-director Joshua Newton and actor Roy Scheider.

This September, British filmmaker Joshua Newton prepared to rerelease his 2009 film Beautiful Blue Eyes. The 2022 premiere was important to Newton, as he’d waited more than a decade to finally share with the world a version of the movie that was previously lost.

Roy Scheider starred in Newton’s movie, and it ended up being his final role. Scheider—who is best known for playing the beloved Jaws police chief who says, “You’re gonna need a bigger boat”—portrayed a New York cop who reunites with his estranged son and tracks down the Nazi responsible for murdering his family members during the Holocaust. Because a camera malfunctioned and damaged some of Newton’s footage and Scheider died while filming, Newton previously thought he’d lost the edit he liked best. But then more than a decade passed, and Newton told Rolling Stone that AI technology had finally advanced enough that the filmmaker could repair lost film frames.

Excited to put this cut of his thriller in front of audiences, Newton prepared to promote the rerelease on Facebook. But in the days leading up to the premiere, Newton told Rolling Stone, he received an email informing him that, in a rare turn of events, “Facebook had banned the filmmakers from promoting or advertising” the movie.

Following Rolling Stone’s report, Meta told Ars the Facebook owner reviewed the case and decided to reverse the ban.

“We reviewed the ads and page in question and determined that the enforcement was made in error, so we lifted the restriction,” a Meta spokesperson told Ars.

Why was the film banned?

Facebook moderators told Newton that his film was banned because the company’s ad policy restricts content that “includes direct or indirect assertions or implications about a person’s race.” Because Newton’s movie in the US is titled Beautiful Blue Eyes, Facebook moderators banned its promotion in Facebook ads, seemingly reading the title as hinting at race.

Newton told Rolling Stone that his parents survived the Holocaust and that the movie was based on his late father’s life. His own son, Alexander Newton, featured in the film and performed its titular song. Alexander tweeted that his song was also banned on Facebook and Instagram.

“This is the action of haters—and there are sadly many in our society—who seek to damage the film in order to trivialize the Holocaust,” Joshua Newton told Rolling Stone. “Surely, Mark Zuckerberg did not intend this to happen.”

Newton’s production company did not immediately respond to Ars’ request for comment after Meta lifted the ban.

How Facebook made its decision

According to Newton, after he received the email informing him that he would not be allowed to promote his film on Facebook, he immediately appealed the decision. Rolling Stone shared the message that Facebook sent in response to Newton’s appeal, informing the filmmaker that advertising and even trailers were “permanently restricted.”

“After a requested review of your Facebook account, we confirmed it didn’t comply with our Advertising Policies or other standards. You can no longer advertise using Facebook Products. This is our final decision,” the message said.

Newton’s movie is now being screened in hundreds of movie theaters that are simultaneously rereleasing Jaws in IMAX. It’s a double feature that seems designed as a celebration of Scheider through his best-known and final roles. Newton said he is disgusted that Facebook has seemingly limited the reach of his film. Rolling Stone noted that Facebook has rarely taken such action in content reviews of movies, finding one other instance in 2020 when a pro-Trump documentary was censored ahead of the election.

“Every decent and sane human being on this planet should be alarmed by Meta-Facebook’s ban on the advertising of a Holocaust-related film,” Newton told Rolling Stone. “Mark Zuckerberg has created a monster that has no oversight. It’s one thing to be flagged by an algorithm. It’s another for Meta-Facebook employees to review the flag and uphold it, knowing full well that the title is not discriminatory and that the film is Holocaust-related.”

Newton told Rolling Stone that he is considering legal action against Facebook, but Meta told Ars that all restrictions on his movie’s Facebook page and ads have been lifted.

Increased scrutiny on Facebook censorship

Although resolved, Newton’s case adds another data point to the ongoing controversy over invalid censorship that has taken place on Facebook for years, with the government increasingly pressuring Facebook to beef up content moderation and the general public mostly confused about why and when certain content is restricted.

For years, the American Civil Liberties Union has taken the stance that Facebook shouldn’t censor any offensive speech, a policy that, if adopted, would ensure filmmakers like Newton never have to worry about Facebook blocking the promotion of their movies.

Recently, there has been some new information shared publicly on Facebook content moderation decisions. The Wall Street Journal last week posted an opinion piece from its Editorial Board after emails shared in court showed how Facebook and other tech companies directly coordinate with federal officials to decide what gets removed. The WSJ board called on Facebook to release more emails so the public can better understand the government’s role in Facebook’s moderation practices.


To defeat FTC lawsuit, Meta demands 100+ rivals share biggest trade secrets

Several years after Facebook-owner Meta acquired WhatsApp and Instagram, the Federal Trade Commission launched an antitrust lawsuit that claimed that through these acquisitions, Meta had become a monopoly. A titan wielding enormous fortune over smaller companies, the FTC said, Meta began buying or burying competitors in efforts that allegedly blocked rivals from offering better-quality products to consumers. In this outsize role, the FTC claimed, Meta prevented evolving consumer preferences for features like greater privacy options and stronger data protection from becoming the norm. The only solution the FTC could see? Ask a federal court to help it break up Meta and undo the damage the FTC did not foresee when it initially approved Meta’s acquisitions.

To investigate whether Meta truly possesses monopolistic power, both Meta and the FTC have subpoenaed more than 100 Meta competitors each. Both hope to clearly define in court how much Meta dominates the market and just how negatively that impacts its competitors.

Through 132 subpoenas so far, Meta is on a mission to defend itself, claiming it needs to gather confidential trade secrets from its biggest competitors—not to leverage such knowledge and increase its market share, but to demonstrate in court that other companies can compete with Meta. According to court documents, Meta’s so hungry for this background on its competitors, it says it plans to subpoena more than 100 additional rivals, if needed, to overcome the FTC’s claims.

Meta is asking its competitors for a wide range of insights, from their best-performing features to the names of their biggest advertisers. It wants to see all business receipts, which, to its competitors, seemingly turns the antitrust litigation into a business opportunity for Meta to find out precisely how other companies attract users, scale products, and gauge success.

Among rivals already subpoenaed are Twitter, TikTok owner ByteDance, Reddit, Pinterest, LinkedIn, and Snap. More requests could be made in the coming years, though, before discovery for both sides concludes on January 5, 2024.

Snap, others oppose “overly broad” subpoenas

Unsurprisingly, nobody wants to hand over confidential trade secrets to Meta. And because the only competitor that the FTC named in the antitrust litigation was Snap, Bloomberg reports that Snap has become one of the most vocal opponents of Meta’s “overly broad” subpoenas.

In a court document, Snap lawyers said that what Meta wanted was effectively access to Snap’s “competitive playbook,” seeking “materials on every product and nearly every aspect of Snap’s business, with a time range that spans almost Snap’s entire existence.”

“This is exactly the type of information Meta would use to further harm Snap competitively,” Snap’s lawyers said.

Meta claimed the FTC sought similar information from Snap, but Snap claimed the FTC’s requests were “far narrower.”

Snap’s major concern is that Meta has a history of duplicating rival products’ features, including its controversial new feature dominating its Facebook and Instagram feeds, TikTok-like videos called Reels. Both Snap and ByteDance responded to subpoenas by asking Meta to limit the scope of documents requested and prevent Meta’s in-house counsel from accessing any documents containing sensitive trade secrets, which could then ostensibly be shared with Meta employees. Court documents show that Snap withheld “the vast majority of the documents Meta has requested,” and Meta rejected Snap’s offers to share confidential business information only with Meta’s outside counsel.

For Snap, the subpoena would be easier to swallow if it knew Meta couldn’t act on any information shared. Meta responded to the outside counsel request by noting that the court had already rejected “the same request from Snap” and reiterating that its in-house counsel is “not involved in competitive decision-making.”

Snap did not immediately respond to Ars’ request for comment.

Today, Bloomberg reported that a California federal court will decide approximately how many documents and, therefore, how many trade secrets Snap should have to reveal to help Meta build its defense against the FTC.

Snap is hoping the court will “quash the subpoena in its entirety,” in part because the request is “massively overbroad and unduly burdensome.” Snap claims Meta is asking it “to reconstruct virtually every decision Snap has made.” Snap also says Meta has not demonstrated a “substantial need” to justify “all the confidential commercial information that it demands.” Instead, it accuses Meta of “clear fishing expeditions” with “irrelevant” document requests, such as seeking information on the FTC’s investigation into Snap’s disappearing messages feature.

Meta wants the court to order Snap to produce all requested documents, partly because it claims that such a broad request “is common, and appropriate, in large antitrust cases.” It also claims that Snap is “not a truly disinterested non-party” in the antitrust litigation and that delays in producing documents were “designed to prejudice Meta’s defense.” And as legal challenges to Meta’s subpoenas from other social media companies mount, Bloomberg reports, a Meta spokesperson told Ars that the resistance itself appears to somewhat substantiate Meta’s claim that the industry remains competitive despite Meta’s ownership of Facebook, WhatsApp, and Instagram.

“Meta competes vigorously with many companies to help people share, connect, communicate, or simply be entertained,” Meta spokesperson Christopher Sgro told Bloomberg and Ars. “As a natural step in preparing our defense to the FTC’s lawsuit, we have served subpoenas on companies with which we compete or which we believe have other information relating to the FTC’s claims.”


Ted Cruz blows up Congress’ plan to save journalism by making Big Tech pay up

Both Google and Meta have taken steps to start paying US publishers for aggregating their news content, but neither tech giant has yet found a perfect solution that would fairly compensate publishers and potentially help combat the mass shuttering of newsrooms across America. The Wall Street Journal reported that Facebook stopped its program paying US publishers in July, and more recently, media outlets haven’t been thrilled by the terms of Google’s “News Showcase” program either, and have mostly resisted partnership.

In the latter case, WSJ reported that some media outlets were holding out on joining the News Showcase for a very specific reason. They were waiting to see what happened with a new bill—the Journalism Competition and Preservation Act—which seemed like a better deal. If passed, the JCPA would let US news publishers collectively bargain with Google and Meta for fair payment. However, now, Senator Ted Cruz (R-Texas) has introduced a new amendment to the JCPA which, the Chicago Tribune reports, was narrowly approved this week. And Cruz’s new stipulation may have effectively killed the previously bipartisan bill by diminishing Democratic support, thus crushing US publishers’ supposed dream deal.

What Cruz has suggested is an amendment to prohibit tech companies and news organizations from using the collective bargaining tool to collude on efforts to censor content. While the bill itself waives an antitrust agreement so that news organizations can collectively bargain with tech companies, Cruz says that this key antitrust exemption would not apply if during the negotiation process anyone “engages in any discussion of content moderation.”

After an 11 to 10 vote, the Cruz amendment was approved.

The Chicago Tribune reported that one of the bill’s co-sponsors, Amy Klobuchar (D-Minn.), said the amendment divided Congress, and Democrats would have to walk away from the bill. She’s concerned that the Cruz amendment would provide an escape route for Google or Meta to avoid joint negotiations by simply raising discussions around content moderation “at the first opportunity.”

Cruz seemed to suggest that preventing censorship risks was more critical than preventing platforms from shutting down joint negotiations.

“What happened today was a huge victory for the First Amendment and free speech,” Cruz said in a statement to Ars. “Sadly, it is also a case study in how much the Democrats love censorship. They would rather pull their bill entirely than advance it with my proposed protections for Americans from unfair online censorship.”

Klobuchar, Google, and Meta did not immediately provide Ars with comments.

Criticisms of the JCPA

The New York Times recently reported that 360 newspapers have closed since the pandemic started. Before that, newsroom financial instability was just as bad, the NYT says, with newspapers shuttering at a rate of two per week. Largely to blame: a decline in advertising revenue at small newsrooms. Meanwhile, tech giants like Google and Meta continued sucking up billions in advertising dollars, generating the greatest chunk of their vast wealth while, for the most part, not paying newsrooms for money made from aggregating content.

In an effort to help save local journalism from extinction, Klobuchar joined with Senator John Kennedy (R-La.) and Representatives David Cicilline (D-R.I.) and Ken Buck (R-Colo.) to introduce the Journalism Competition and Preservation Act. The goal was to push ad dollars back to news organizations by forcing Google and Meta to pay publishers for aggregation.

The Chicago Tribune reported that while some journalism organizations and free press advocates consider the bill a “lifeline,” others criticize the bill for “everything from the temporary antitrust exemption to undermining copyright law and fair use on the Internet.”

The Electronic Frontier Foundation, a nonprofit dedicated to defending civil liberties online, reported that the JCPA was not a “magic solution.” EFF opposes the JCPA in part because it mostly just creates more opportunities for more giants to get involved. Rather than helping small newsrooms, the law could encourage more large corporations to buy up more newspapers, lay off more staff, and replace even more news with click-bait.

On a larger scale, the notion of news publishers licensing links to tech companies “implies a sort of property right in links, an ownership of how information is shared,” EFF reported. “That has grave consequences for the entire Internet, which depends on the ability to link to information sources from far and wide. Linking isn’t copyright infringement, at least under current law. But the JCPA risks creating a new quasi-copyright law for linking, or even leading the courts to extend copyright law to cover some forms of linking.”

EFF did not immediately respond to Ars’ request for comment, but EFF experts have warned, “Creating an implicit right to control linking in any context won’t preserve journalism, it will let it rot away.”

Instead of focusing on enacting the JCPA, EFF reported, newsrooms could be better protected and made more profitable if Congress promoted more digital ad competition by passing the Competition and Transparency in Digital Advertising Act. Both bills are still being reviewed by committees.

While Congress debates the merits of propping up thousands of small US newsrooms through the JCPA and the Digital Advertising Act, Google’s News Showcase has yet to debut in the US. Most recently, some larger publishers did sign licensing deals, though. WSJ reported that Bloomberg and Reuters could make up to $3 million annually from multiyear deals, and WSJ parent News Corp projects future earnings of more than $100 million annually from multiyear deals with Google and other tech companies.


Zuckerberg avoids Cambridge Analytica deposition as Facebook agrees to settle

It’s been four years since users alleging harm caused by the Cambridge Analytica scandal sued Facebook (now Meta) for selling tons of easily identifying personal information to third parties, allegedly doing so even when users thought they had denied consent. In 2018, plaintiffs alleged in a consolidated complaint that Facebook acted in “astonishingly reckless” ways and did “almost nothing” to protect users from the potential harms of this “intentionally” obscured massive data market. The company, they said, put 87 million users at “a substantial and imminent risk of identity theft, fraud, stalking, scams, unwanted texts, emails, and even hacking.” And users’ only option to avoid these risks was to set everything on Facebook to private—so even friends wouldn’t see their activity.

Because of Facebook’s allegedly deceptive practices, plaintiffs said that “Facebook users suffered concrete injury in ways that transcend a normal data breach injury.” Plaintiffs had gotten so far in court defending these claims that Meta CEO Mark Zuckerberg was scheduled to take the stand for six hours this September, along with lengthy depositions scheduled for former Facebook Chief Operating Officer Sheryl Sandberg and current Meta Chief Growth Officer Javier Olivan. However, it looks like none of those depositions will be happening now.

On Friday, a joint motion was filed with the US District Court for the Northern District of California. It confirmed that the plaintiffs and Facebook had reached a settlement agreement that seems to have finally ended the class action lawsuit that Meta had previously said it hoped would be over by March 2023.

It’s not clear yet how much the settlement will cost Facebook—which has already paid billions in fines to the FTC—but there may be more information on Facebook sanctions in the next few days. Although the joint motion requested 60 days to draft a written settlement agreement, US District Judge Vince Chhabria only granted the motion in part. Chhabria said he still expects all parties to “appear at the hearing on Friday, September 2 to discuss sanctions.”

Meta’s legal team told Ars that it has no comment. The plaintiffs’ legal team did not immediately respond.

What did Facebook users want the court to decide?

Originally, the plaintiffs asked in their complaint that the court order a Facebook audit and disclosure, a change to Facebook’s default settings, “compensation for intrusions into privacy on an unprecedented level,” and damages for users “who did not understand what was taken from them and how Facebook has profited.” Additional relief was also requested.

Part of their complaint was that the privacy settings that actually gave users control over their data were buried deep in the app, while the settings made easily accessible did not offer the same control. This, the plaintiffs alleged, deceived users, who said they weren’t even informed once Facebook learned about the illegal Cambridge Analytica data purchase.

Since 2018, Meta has changed some of Facebook’s policies, but plaintiffs alleged that “it has done so now only in the wake of regulatory and governmental outrage.” In addition to fines, the FTC imposed “significant requirements to boost accountability and transparency” in 2019.

Currently, Congress is considering legislation called the Digital Accountability and Transparency to Advance Privacy Act (aka the DATA Privacy Act), which would grant federal protections for users of popular web services. If passed, the law would force tech companies to provide “consumers with accessible notice of the business’ privacy practices.”

Until tech becomes better regulated, users are forced to place trust in companies to update their policies as privacy and security risks become known. On an earnings call earlier this year, Meta signaled that it is motivated to continue updating its policies. Leadership said that the company would be investing in more privacy-safe ways for advertisers to target marketing to users. That includes beta-testing with large advertisers a new “privacy enhancing” technology called Private Lift. This “measurement solution,” Meta says, adds “extra layers of privacy to limit the information that can be learned” by both advertisers and Meta while still allowing advertisers to effectively target users.


Loathsome anti-vax group run by RFK Jr gets Meta permaban—finally


Yesterday, the anti-vaccine group the Children’s Health Defense celebrated the spread of poliovirus in New York, mocking health officials spreading awareness that polio is vaccine-preventable. Today, CHD reports that the group was also permanently banned from Facebook and Instagram yesterday. A screenshot of Meta’s notification, included in CHD’s press release, says that the ban is due to CHD’s practice of spreading “misinformation that could cause physical harm.”

A Meta spokesperson tells Ars that Meta “removed the Instagram and Facebook accounts in question for repeatedly violating our COVID-19 policies.”

CHD says the ban came “without warning,” cutting the anti-vax group off from hundreds of thousands of followers on both social media platforms. Denying allegations that the group spreads misinformation, CHD suggested instead the ban is connected to CHD’s lawsuit against Meta that questions the validity of how Facebook and the Centers for Disease Control and Prevention label health misinformation. The group’s legal counsel in that lawsuit, Roger Teich, suggested that the ban was improper.

“Censorship is not only unconstitutional, it’s un-American,” Teich said in the press release.

CHD founder Robert F. Kennedy Jr.—who, as AP reported, likens getting vaccines to “drinking Kool-Aid”—seemed to suggest that Meta was retaliating against CHD on behalf of the CDC: “Facebook is acting here as a surrogate for the federal government’s crusade to silence all criticism of draconian government policies,” Kennedy said in the press release.

CHD directed Ars to the press release for comments. Teich did not immediately respond to Ars’ requests to comment.

Meta’s move comes after years of tension with CHD over its content. Back in 2019, researchers identified CHD as the single leading source of anti-vax ads on Facebook, yet it has taken the entire pandemic to get to the point of banning the account.

Meta’s not the only social media platform that has pushed back on CHD’s misinformation. Last year, YouTube banned CHD, among other anti-vax accounts, for claiming the COVID-19 vaccine was ineffective. In 2021, Meta also made headlines when Instagram banned Kennedy’s account “for repeatedly sharing debunked claims about the coronavirus or vaccines.”

Although CHD said it had no warning, Vice reported that CHD shared in an email newsletter that this week’s permanent ban followed a temporary 30-day ban: “Despite not posting content on Facebook for the past 21 days due to an existing 30-day ban, and constantly self-censoring our content in an attempt to avoid continual shadow-banning and censorship, both pages were abruptly de-platformed. Removing CHD accounts is evidence of a clearly orchestrated attempt to stop the impact we have during a time of heightened criticism of our public health institutions.”

Meta tells Ars that before any ban, Meta uses “a strike system to count violations” so it can hold accountable “those who continue violating” any Meta policies. An account is restricted or disabled “based on the number and nature of the strikes it accrues.”
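Meta hasn’t published the exact thresholds behind its strike system, but the general shape—count violations per account, escalate the penalty as strikes accumulate, and weigh severity—can be sketched as follows. The class name, thresholds, and policy labels here are invented for illustration, not Meta’s actual rules:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Toy model of a strike-based enforcement system (thresholds are hypothetical)."""
    name: str
    strikes: list = field(default_factory=list)  # list of (policy, severe) tuples

    def add_strike(self, policy: str, severe: bool = False) -> None:
        self.strikes.append((policy, severe))

    def status(self) -> str:
        count = len(self.strikes)
        # Repeated severe violations (e.g., harmful health misinfo) escalate faster.
        if count >= 3 and any(severe for _, severe in self.strikes):
            return "disabled"      # permanent ban
        if count >= 5:
            return "disabled"
        if count >= 2:
            return "restricted"    # temporary limits, e.g., a 30-day posting ban
        return "active"

acct = Account("example_page")
acct.add_strike("covid-misinfo", severe=True)
acct.add_strike("covid-misinfo", severe=True)
print(acct.status())  # restricted
acct.add_strike("covid-misinfo", severe=True)
print(acct.status())  # disabled
```

This mirrors the trajectory CHD described: a 30-day restriction (the “restricted” tier here) followed by a permanent removal once further strikes accrued.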


TikTok vows to close loophole letting users skirt ban on political ads


As TikTok’s popularity and earnings soar, the company has decided to crack down on political content creators—sometimes with thousands of followers—who violate the app’s policies against paid political ads. TikTok says it has been aware of the problem since 2020, but it became an issue of public concern in 2021. That’s when the Washington Post and the Mozilla Foundation uncovered TikToks from both left- and right-wing content creators that appeared to violate FTC guidelines, which require, at a minimum, that sponsored posts be marked with an “#ad” hashtag.

TikTok has always left it up to content creators to self-disclose when they conduct deals with business partners off-site. However, in June 2021, the company made it easier to flag posts as ads (or “branded content”) in an effort to encourage more self-disclosure. Mozilla considered this a step in the right direction, but it recommended that TikTok work harder ahead of the next elections to promptly remove undisclosed paid political ads.

This week, TikTok appears to have taken that advice, with its head of US safety, Eric Han, announcing that the company will remove any content that violates TikTok’s rules on paid political ads. To prepare creators for stricter enforcement, Han said TikTok will post an educational series in its Creator Portal and host briefings with content creators to ensure “the rules of the road are abundantly clear when it comes to paid content around elections.”

“TikTok does not allow paid political ads, and that includes content influencers are paid to create,” Han wrote. “We work to educate creators about the responsibilities they have to abide by [in] our Community Guidelines and Advertising policies as well as FTC guidelines.”

The content creator loophole

TikTok banned paid political ads on its platform in 2019, but according to Reuters, content creators became a “loophole” that campaign strategists could exploit to spread messaging on the platform. It’s not just a TikTok problem, either. The Washington Post reported in 2020 that conservative nonprofit Turning Point USA “was recruiting and paying young people” on Facebook and Twitter “to pump out false messages about voter fraud, the coronavirus, and Joe Biden in order to bolster Trump’s re-election campaign.”

Mozilla said that it has become “urgent” for popular platforms like TikTok to change how they moderate content, or they risk ongoing undetected violations by bad actors. In 2018, for example, Mozilla exposed tax filings showing that TPUSA spent millions of dollars developing services that include “influencer media programs” with hundreds of content creators. The tax documents indicated that these funds are dedicated to ensuring the organization’s “long-term vitality.” Mozilla also found a small grassroots progressive political action committee called “The 99 Problems” that it said paid creators to spread pro-Biden messaging without disclosing that the posts were sponsored.

TPUSA and The 99 Problems didn’t immediately respond to questions from Ars asking if anything has changed about how they run influencer media programs.

In a press briefing described in a Reuters report, Han said that TikTok is adding internal teams to help spot content violations. TikTok will also continue depending on outside reports from researchers and media outlets to track the scope of the issue. Experts say TikTok should share more data to support those efforts.

This week, TikTok also announced that it would work with accredited fact-checking organizations to prevent election misinformation from spreading in both unmarked ads and other general posts. Han said that TikTok’s policy is to restrict any content that is being fact-checked from its recommended “For You” pages.

TikTok did not immediately respond to Ars’ request for comment.


Lawsuits: OnlyFans bribed Instagram to put creators on “terrorist blacklist” [Updated]


(Update, 5:27 pm ET: A GIFCT spokesperson clarified how the “blacklist”—or more accurately, in its terms, its terrorist content database—works to log terrorist activity between different online platforms. She says only videos and images are currently hashed, and nothing gets automatically removed from other platforms. Instead, once content is hashed, each platform considers things like the type of terrorist entity it is or the severity of the content and then weighs those measurements against its own policies to decide if it qualifies for removal or content advisory labels.

The GIFCT spokesperson also noted that Instagram accounts are not hashed, only Instagram images and videos, and there is no “blacklist” of users, although GIFCT analyzes who produces the content the organization hashes. The database records hashes to signal terrorist entities or terrorism content based on the United Nations sanctions list of terrorist entities from the UN Security Council. And all that content remains in the database unless a platform/GIFCT member like Meta uses a GIFCT feedback tool that was introduced in 2019 to flag the content as not qualifying as terrorist content. The feedback tool can also be used to recommend modified labels of content. Currently, that’s the only way to challenge content that gets hashed. GIFCT members also maintain active discussions on content moderation with GIFCT’s “centralized communications mechanism.” In these discussions, the spokesperson says none of the complaints raised in the lawsuit have been mentioned by members.

About two years ago, GIFCT became an independent nonprofit, and since then, it has released annual transparency reports that provide some insights into the feedback it receives. The next transparency report is due in December.)

Original story: Through the pandemic, OnlyFans took over the online adult entertainment world to become a billion-dollar top dog, projected to earn five times more net revenue in 2022 than in 2020. As OnlyFans’ business grew, content creators on rival platforms complained that social media sites like Facebook and Instagram were blocking their content but seemingly didn’t block OnlyFans with the same fervor, creating an unfair advantage. OnlyFans’ mounting success amid every other platform’s demise seemed to underscore its mysterious edge.

As adult entertainers outside of OnlyFans’ content stream looked for answers to their declining revenue, they realized that Meta had allegedly targeted their accounts to be banned not only for posting supposedly inappropriate content but seemingly also for suspected terrorist activity. The more they dug into why they had been branded as terrorists, the more they suspected that OnlyFans paid Meta to put the mark on their heads—resulting in account bans that went past Facebook and Instagram and spanned popular social media apps across the Internet.

Now, Meta has been hit with multiple class action lawsuits alleging that senior executives at Meta accepted bribes from OnlyFans to shadow-ban competing adult entertainers by placing them on a “terrorist blacklist.” Meta claims the suspected scheme is “highly implausible” and that it’s more likely that OnlyFans beat its rivals in the market through successful strategic moves, like partnering with celebrities. However, lawyers representing three adult entertainers suing Meta say the owner of Facebook and Instagram will likely have to hand over documents to prove it.

Meta and its legal team did not immediately respond to Ars’ request for comment, but in its motion to dismiss, Meta says that even if Meta employees had launched “a vast and sophisticated scheme involving manipulation of automated filtering and blocking systems,” the company would not be liable. As a publisher, Meta says it is protected by the First Amendment and the Communications Decency Act in moderating content created by adult entertainment performers however it sees fit. The tech company also says it would be against Meta’s interests to manipulate algorithms to drive users off Facebook or Instagram to OnlyFans.

Fenix International Limited, which owns OnlyFans, also filed a motion to dismiss, claiming that the lawsuit has no merit and that OnlyFans enjoys the same publisher protections as Meta. Neither Fenix nor its legal team immediately responded to Ars’ request for comment.

A spokesperson for the legal team representing the adult entertainers, Millberg, provided documents filed last week in response to both companies’ motions to dismiss, which Millberg considers “meritless.” They say that the First Amendment and CDA protections cited by Meta don’t apply, because plaintiffs aren’t suing over their content being blocked, but over allegations that the companies engaged in unfair business practices and “a scheme to misuse a terrorist blacklist.”

Plaintiffs asked the judge to reject the motions to dismiss, which by law would ordinarily prevent discovery in the case, or, if the judge is persuaded by the motions, to allow for limited discovery before deciding. A Millberg spokesperson says this is just the start of a long legal process, and they expect that their request for discovery will be granted. That would mean Meta and OnlyFans would have to share evidence to disprove the claim, which as of yet, neither has.

It’s likely that any judgment on the companies’ motions to dismiss will influence how the companies defend against other lawsuits. For the Millberg class action lawsuit, a hearing is scheduled in the Northern District of California on September 8. The judge will be William Alsup, who some may recall received media attention in 2014 for siding with a woman who contested the federal government’s no-fly list and for recommending a process for correcting mistakes, so that the US does not label people as terrorists who aren’t. Adult entertainers are hoping that he’ll be equally sympathetic in helping them remove that unearned label.

What is this terrorist watch list?

It’s not just adult entertainers suing. Rival adult entertainment platforms, FanCentro and JustFor.Fans, are also suing, claiming that their social media traffic dropped so dramatically that “it could not have been the result of filtering by human reviewers.” Instead, they allege that Fenix paid off Meta and tanked its rivals’ traffic by funneling money through a “secret Hong Kong subsidiary into offshore Philippines bank accounts set up by the crooked Meta employees.”

To achieve maximum effect in its alleged mission to delete rival adult content from the Internet, Fenix allegedly asked Meta to add 21,000 names and social media accounts to a terrorist blacklist that would ensure their content wouldn’t be displayed on Facebook, Instagram, Twitter, or YouTube.

The Global Internet Forum to Counter Terrorism (GIFCT) was co-founded in 2017 by owners of major social media platforms and other companies “to prevent terrorists and violent extremists from exploiting digital platforms.” Whenever a content moderation system flags an account on one platform, a digital fingerprint called a hash is shared with all the other platforms so that the image, video, or post won’t show up anywhere.
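The hash-sharing mechanism described above can be sketched in a few lines. This is an illustrative toy, not GIFCT’s actual implementation—real systems use perceptual hashes (so that near-duplicate images and re-encoded videos still match), whereas the cryptographic SHA-256 digest used here matches only byte-identical files:

```python
import hashlib

# Simplified model of a shared hash database: when one member platform
# flags content, its fingerprint becomes visible to all other members.
shared_hash_db = set()

def flag_content(content: bytes) -> str:
    """Platform A flags content; its hash is added to the shared database."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_db.add(digest)
    return digest

def check_content(content: bytes) -> bool:
    """Platform B checks an upload against the shared database."""
    return hashlib.sha256(content).hexdigest() in shared_hash_db

flag_content(b"flagged video bytes")
print(check_content(b"flagged video bytes"))   # True
print(check_content(b"unrelated cat video"))   # False
```

Note that, per the GIFCT clarification at the top of this story, a match in the real database does not automatically remove content everywhere: each platform weighs the hash against its own policies before acting.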

Critics, like the Electronic Frontier Foundation, have said the practice limits users’ rights to free expression across the Internet any time a post gets mistakenly flagged, with little recourse to get off the terrorist list, or even confirm if they’re on it. The GIFCT told the BBC that it continually works to “enhance transparency and oversight of the GIFCT hash-sharing database” by extensively engaging with stakeholders.

GIFCT did not immediately respond to Ars’ request for comment. The Millberg legal team says that it wants to begin discovery in September by asking Meta and GIFCT to share records that would either prove or disprove whether 21,000 Instagram accounts were improperly branded as terrorists.