
Stanzione (Garante Privacy): "Protecting the person from the distortive use of technology"

The State of Privacy '22 event, organized by the Garante per la privacy (Italy's data protection authority), opened this morning at the Pietrarsa National Railway Museum in Naples, with the aim of identifying ideas and solutions in response to the imminent challenges facing personal data protection.

Held to mark the Authority's 25th anniversary, the event brings together more than 250 experts from the public and private sectors and from Big Tech, and is organized into individual talks and thematic working tables devoted to a series of important sectors.

Opening the proceedings, the Authority's President Pasquale Stanzione pointed to the "new territories to explore, from neurorights (first and foremost habeas mentem, as a precondition of the freedom of self-determination) to the gig economy: a terrain on which platform capitalism risks reproducing new and more insidious forms of labor exploitation." "We will ask ourselves," he continued, "about the changes that have marked the Garante's role in our legal system and in the European one, in its relationship with the legislator and, respectively, with the other European bodies, going so far as to envisage a veritable 'manifesto' setting out some possible objectives to pursue in the coming years."

"In analyzing these aspects and outlining some possible directions to follow," Stanzione concluded, "we will be keenly aware that, however new and different the contexts may be compared with 25 years ago, the need remains the same: to protect the person from the risks of a distortive use of technology, with its excess of means and its poverty of ends."

The proceedings continued with Paola Pisano, advisor for the digital economy and technological innovation at the Ministry of Foreign Affairs, who highlighted the collaboration with the Garante on the trial of electronic voting for the renewal of the Committees of Italians Abroad and on data transfers to third countries, a collaboration that has meant greater protection for citizens.

European Data Protection Supervisor Wojciech Wiewiorowski reflected on the value of data, concluding: "Data are not a commodity, they are not the oil of the future, but information about human beings, and as such they must be protected."

Guido Scorza, a member of the Authority and curator of the event, then took the floor, sharing with the audience his "dreams" for the near future of data protection. "I have a dream," Scorza said, "of turning genuine transparency of data processing into reality; transparency leads to awareness, and awareness means freedom. To turn the dream about rights into reality, we need a 'new social pact' that balances new economic opportunities with respect for the person. I dream," Scorza concluded, "of redesigning services and products with children in mind."

The "State of Privacy '22" event closed with the signing of the "Pietrarsa Manifesto."

The "Manifesto" asks its signatories to commit to promoting measurable actions and results in specific areas such as transparency, awareness, and education, and to carry out activities through the Pietrarsa Manifesto's official website.

The initiatives aim to make privacy notices on the processing of personal data transparent, accessible, and understandable, moving from formal to genuine transparency; to raise people's awareness of the value of their data through promotional activities, information campaigns, prize competitions, and current-affairs features; and to design training programs, including remote ones, aimed at children and the elderly, on the value of personal data and on using digital devices and services more knowingly.

The first signatories of the Manifesto, after the signature of Garante President Pasquale Stanzione, were the Postal Police, the Guardia di Finanza, Roma Tre University, Fondazione Telefono Azzurro, Iliad, Ferrovie dello Stato Italiane, Arduino, RoadHouse, Meta, Mediaset, Google, Carioca, Enel, @LawLab (Luiss), Mondadori, McDonald's, TikTok, Sky, and science communicator Marco Camisani Calzolari, who presented the initiatives they intend to pursue in the spirit of the "Manifesto."

https://www.key4biz.it/stanzione-garante-privacy-tutelare-la-persona-dalluso-distorsivo-della-tecnica/417126/




Artist finds private medical record photos in popular AI training data set

Censored medical images found in the LAION-5B data set used to train AI. The black bars and distortion have been added.
Ars Technica

Late last week, a California-based AI artist who goes by the name Lapine discovered private medical record photos taken by her doctor in 2013 referenced in the LAION-5B image set, which is a scrape of publicly available images on the web. AI researchers download a subset of that data to train AI image synthesis models such as Stable Diffusion and Google Imagen.

Lapine discovered her medical photos on a site called Have I Been Trained, which lets artists see if their work is in the LAION-5B data set. Instead of doing a text search on the site, Lapine uploaded a recent photo of herself using the site’s reverse image search feature. She was surprised to discover a set of two before-and-after medical photos of her face, which had only been authorized for private use by her doctor, as reflected in an authorization form Lapine tweeted and also provided to Ars.

Lapine has a genetic condition called Dyskeratosis Congenita. “It affects everything from my skin to my bones and teeth,” Lapine told Ars Technica in an interview. “In 2013, I underwent a small set of procedures to restore facial contours after having been through so many rounds of mouth and jaw surgeries. These pictures are from my last set of procedures with this surgeon.”

The surgeon who possessed the medical photos died of cancer in 2018, according to Lapine, and she suspects that they somehow left his practice’s custody after that. “It’s the digital equivalent of receiving stolen property,” says Lapine. “Someone stole the image from my deceased doctor’s files and it ended up somewhere online, and then it was scraped into this dataset.”

Lapine prefers to conceal her identity for medical privacy reasons. With records and photos provided by Lapine, Ars confirmed that there are medical images of her referenced in the LAION data set. During our search for Lapine’s photos, we also discovered thousands of similar patient medical record photos in the data set, each of which may have a similarly questionable ethical or legal status. Many of these have likely been integrated into popular image synthesis models that companies like Midjourney and Stability AI offer as a commercial service.

This does not mean that anyone can suddenly create an AI version of Lapine’s face (as the technology stands at the moment)—and her name is not linked to the photos—but it bothers her that private medical images have been baked into a product without any form of consent or recourse to remove them. “It’s bad enough to have a photo leaked, but now it’s part of a product,” says Lapine. “And this goes for anyone’s photos, medical record or not. And the future abuse potential is really high.”

Who watches the watchers?

LAION describes itself as a nonprofit organization with members worldwide, “aiming to make large-scale machine learning models, datasets and related code available to the general public.” Its data can be used in various projects, from facial recognition to computer vision to image synthesis.

For example, after an AI training process, some of the images in the LAION data set become the basis of Stable Diffusion’s amazing ability to generate images from text descriptions. Since LAION is a set of URLs pointing to images on the web, LAION does not host the images themselves. Instead, LAION says that researchers must download the images from various locations when they want to use them in a project.
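Since LAION distributes only metadata, not images, "using" the data set amounts to parsing rows of URL-and-caption pairs and fetching each image yourself. The sketch below illustrates that structure with hypothetical file contents and column names (LAION's actual schema and tooling differ); it only parses the index and does not perform any downloads.

```python
# Minimal sketch of how a URL-only data set like LAION is consumed:
# the data set itself is just rows of (url, caption) metadata, and
# anyone training a model must fetch the images separately.
# The column names and sample rows here are hypothetical.
import csv
import io

def load_index(csv_text):
    """Parse a (url, caption) index into a list of download tasks."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["url"], row["caption"]) for row in reader]

sample = """url,caption
https://example.com/cat.jpg,a photo of a cat
https://example.com/clinic-photo.png,clinical photo
"""

tasks = load_index(sample)
# Each entry is a URL to fetch plus its caption; nothing in the list
# itself indicates whether the image was ever meant to be public.
```

This structure is why LAION can say it "does not host the images": the sensitive content lives at the original URLs, and responsibility for fetching it shifts to whoever runs the download step.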

The LAION data set is replete with potentially sensitive images collected from the Internet, such as these, which are now being integrated into commercial machine learning products. Black bars have been added by Ars for privacy purposes.
Ars Technica

Under these conditions, responsibility for a particular image’s inclusion in the LAION set then becomes a fancy game of pass the buck. A friend of Lapine’s posed an open question on the #safety-and-privacy channel of LAION’s Discord server last Friday asking how to remove her images from the set. “The best way to remove an image from the Internet is to ask for the hosting website to stop hosting it,” replied LAION engineer Romain Beaumont. “We are not hosting any of these images.”

In the US, scraping publicly available data from the Internet appears to be legal, as the results of a 2019 court case affirm. Is it mostly the deceased doctor’s fault, then? Or the fault of the site that hosts Lapine’s images on the web without authorization?

Ars contacted LAION for comment on these questions but did not receive a response by press time. LAION’s website does provide a form where European citizens can request that information be removed from its database to comply with the EU’s GDPR laws, but only if a photo of a person is associated with a name in the image’s metadata. Thanks to services such as PimEyes, however, it has become trivial to associate someone’s face with a name through other means.

Ultimately, Lapine understands how the chain of custody over her private images failed but still would like to see her images removed from the LAION data set. “I would like to have a way for anyone to ask to have their image removed from the data set without sacrificing personal information. Just because they scraped it from the web doesn’t mean it was supposed to be public information, or even on the web at all.”

https://arstechnica.com/?p=1882591




Telegram has a serious doxxing problem

Eugene Mymrin / Getty Images

Burmese influencer Han Nyein Oo rose to prominence in 2020, posting memes and gossip about Burmese celebrities on Facebook to an audience that grew to several hundred thousand people in Myanmar by early 2021. Then, after the country’s military seized power that February, he lurched rightwards, becoming a full-blooded supporter of the junta, which has killed more than 1,500 people and arrested thousands more in bloody crackdowns.

He was soon banned from Facebook for violating its terms of service, so he moved to Telegram, the encrypted messaging app and social sharing platform. There, he posted messages of support for the military, graphic pictures of murdered civilians, and doctored pornographic images purporting to be female opposition figures. Often, these were cross-posted in other channels run by a network of pro-junta influencers, reaching tens of thousands of users.

This year, Han Nyein Oo moved on to direct threats. Opponents of the junta planned to mark the anniversary of the coup on February 1 with a “silent strike,” closing businesses and staying home to leave the streets abandoned. On his Telegram channel, Han Nyein Oo raged, asking his followers to send him photos of shops and businesses that were planning to shut. They obliged, and he began posting the images and addresses to his 100,000 followers. Dozens of premises were raided by police. Han Nyein Oo claimed credit. He did not respond to a request to comment.

“That was the start of the doxxing campaign,” says Wai Phyo Myint, a Burmese digital rights activist. “Since then there’s been an escalation.”

Over the past eight months, Han Nyein Oo’s Telegram channels, and those of other pro-coup figures, including self-defined journalist Thazin Oo and influencers Kyaw Swar and Sergeant Phoe Si, have doxxed hundreds of people that they accuse of siding with the resistance movement, from high-profile celebrities to small-business owners and students. Dozens have since been arrested or been killed in vigilante violence.

Han Nyein Oo’s channel was taken down in March after it was reported for breaching Telegram’s rules on disseminating pornography, but within days he had started another. It now has more than 70,000 followers.

Telegram’s doxxing problem goes far beyond Myanmar. WIRED spoke to activists and experts in the Middle East, Southeast Asia, and Eastern Europe who said that the platform has ignored their warnings about an epidemic of politically motivated doxxing, allowing dangerous content to proliferate, leading to intimidation, violence, and deaths.

In a Telegram message, company spokesperson Remi Vaughn said: “Since its launch, Telegram has actively moderated harmful content on its platform—including the publication of private information. Our moderators proactively monitor public parts of the app as well as accepting user reports in order to remove content that breaches our terms.”

Telegram, which now claims more than 700 million active users worldwide, has a publicly stated philosophy that private communications should be beyond the reach of governments. That has made it popular among people living under authoritarian regimes all over the world (and among conspiracy theorists, anti-vaxxers, and “sovereign citizens” in democratic countries).

But the service’s structure—part encrypted messaging app, part social media platform—and its almost complete lack of active moderation has made it “the perfect tool” for the kind of doxxing campaigns occurring in Myanmar, according to digital rights activist Victoire Rio.

This structure makes it easy for users to crowdsource attacks, posting a target for doxxing and encouraging their followers to dig up or share private information, which they can then broadcast more widely. Misinformation or doxxing content can move seamlessly from anonymous individual accounts to channels with thousands of users. Cross-posting is straightforward, so that channels can feed off one another, creating a kind of virality without algorithms that actively promote harmful content. “Structurally, it’s suited to this use case,” Rio says.

https://arstechnica.com/?p=1883287




Beloved browser extension acquired by non-beloved antivirus firm

Nearly every major site warns you about cookies, and it's a shame they share a moniker with one of life's great pleasures.
Getty Images

Update 9:40am ET: Developer Daniel Kladnik writes Ars that he’s “getting a ton of congrats emails from all around the world … Online you can only find the hate speech, angry people are usually the loudest ones.” He adds that he is “aware of the problems Avast had (created) because of data collecting several years ago, and I can understand why some people are worried. The thing is: data collecting is gone and (as far as I know) Avast has changed for the better. It’s a fine company with good values and a plan for the future.”

Marina Ziegler, public relations director for Avast, writes Ars, “Many online users feel interrupted by web cookies, or even unsure about what they are, what they are used for, and how to adjust them.” Avast does not have “concrete plans for implementation across other products at the moment,” but has implemented cookie consent management into a beta version of Avast Online Security and Privacy.

“The extension does not collect any data,” Ziegler writes. “No user data is collected or sold. Also, we have no plans to change the extension’s way of working at this point. The acquisition was planned before the NortonLifeLock merger.”

Avast understands that “the news about our data processing for the purposes of trend analytics in early 2020 raised a number of questions,” Ziegler writes. Avast closed the business, ended the transfer of data, and has “taken further steps to improve data protection, collaborated with privacy organizations such as TOR and the Future of Privacy Forum, and have received certification of our data protection guidelines with TRUSTe,” Ziegler writes.

Original story follows:

The browser extension “I don’t care about cookies” does one job and does it well. It automatically removes the annoying-but-mandated “This website uses cookies” notices from websites. People like it, donate to it, and don’t ask more of it, a rare find for free software.

“Damn, this is the **** bro, it saved like 50 minutes of my gaming time lol,” reads one review on the tool’s Microsoft Edge Add-Ons page.

The tone changed when the solo developer posted “GREAT NEWS” on the extension’s website. Avast, a giant in cybersecurity that just completed an $8.1 billion merger with NortonLifeLock, will acquire the 10-year-old software for an undisclosed price.

“I am proud and happy to say that Avast … a famous and trustworthy IT company known for the wide range of products that help secure our digital experience, has recognized its value!” developer Daniel Kladnik wrote recently. Kladnik wrote that he would keep working on the extension, that it would remain free, and asked for donations to cease.

Commenters on Facebook, Twitter, and the extension’s various installation pages did not agree with Kladnik’s characterization of Avast. “Congratulations on killing the extension! Avast is cancer on this planet,” wrote a Facebook commenter. “The cure is now worse than the disease,” wrote another. “Sad to see a great pop-up blocking extension being acquired by a well-known pop-up creating company,” someone opined on the Chrome extension page.

There’s always a certain amount of knee-jerk “uninstalling” reaction to software acquisitions, but an extension that handles cookie policy for you being acquired by Avast does raise some questions. (We’ve reached out to both Kladnik and Avast and will update the post with new information).

Beyond the general scale and scope of the recently merged Norton/Avast entity (cleared just four days ago), and long before it, Avast has made news for its history of data management.

Avast closed down data brokerage operation Jumpshot in 2020 after a joint investigation by Vice and PCMag revealed that its antivirus programs were selling browsing data to some of the world’s largest companies, including Home Depot, Google, and Microsoft (and, disclosure, Ars Technica parent company Condé Nast). The data included Google searches, GPS coordinates on Google Maps, and searches on various sites, including YouTube and PornHub. Jumpshot touted itself as “the only company that unlocked walled garden data,” and, in a now-deleted tweet, touted its ability to collect “Every search. Every click. Every buy. On every site.”

In 2019, the creator of AdBlock Plus dug into Avast’s Online Security browser extension (and a similar one from AVG, which Avast acquired). The extension was sending extensive details about the pages visited, activity on those pages, and other data that made de-anonymizing people fairly easy. Google soon after removed Avast and AVG’s extensions from the Chrome Web Store.

In its mainline work providing security, Avast had one notable misstep in distributing a smaller software app it acquired. CCleaner, a utility for thoroughly removing Windows software and its leftover files, was distributed by Avast while it contained malware. The malware, which allowed remote access and control using a seemingly legitimate signing certificate, was inserted by an attacker into CCleaner’s update servers through another firm that Avast acquired.

Avast, a Czech-based company in operation since 1988, has also contributed notable research and security discoveries for more than three decades. In recent years, Avast found 28 malware-infected browser extensions (in 2020), revealed a backdoor inside a federal agency (in 2021), and raised alarms about a Chrome vulnerability being used to target journalists and other specific targets.

Alternatives to “I don’t care about cookies” mentioned by sites and users include Consent-O-Matic and a number of other extensions that have nowhere near the same 10-year history or review build-up of Kladnik’s extension. You could, of course, keep using the extension and watch the updates closely.

https://arstechnica.com/?p=1882086




Is your “Roomba” robot spying on you? Suspicions surround the Amazon group company

Privacy advocates have criticized Amazon’s announced purchase of robot vacuum maker iRobot, saying it would open the door to “pervasive surveillance,” while the Federal Trade Commission has opened an investigation into the $1.7 billion acquisition.

The tech giant’s planned acquisition of the Roomba vacuum maker will give it access to the device’s operating system, which uses a front-facing camera to build complete maps of the inside of people’s homes; those maps could then be fed into Amazon’s enormous trove of data on hundreds of millions of consumers.

“With this acquisition Amazon will gain access to extremely intimate views of our most private spaces that are not available by other means or to other competitors,” more than twenty privacy and civil rights groups said in a letter sent to the FTC on Friday. “The information collected by iRobot’s devices goes beyond home floor plans and includes extremely detailed information about the interiors of consumers’ homes and about residents’ schedules and lifestyles,” reads the letter, shared by the digital rights group Fight for the Future. “Granting Amazon full access to this kind of private information through this acquisition harms consumers.”

Moreover, iRobot’s Roomba cleaning robots will now join the devices controlled by Alexa and become part of its ecosystem, a system that already collects enormous amounts of user information by listening to people’s voices. Amazon will thus also have at its disposal a huge amount of physical information about people’s homes.

Are we sure we really want to hand this kind of information to a single multinational?



https://scenarieconomici.it/il-vostro-robot-roomba-vi-sta-spiando-vi-sono-dei-sospetti-sulla-societa-del-gruppo-amazon/




FTC sues data broker that tracks locations of 125M phones per month


The Federal Trade Commission on Monday sued a data broker for allegedly selling location data culled from hundreds of millions of phones that can be used to track the movements of people visiting abortion clinics, domestic abuse shelters, places of worship, and other sensitive places.

In a complaint, the agency said that Idaho-based Kochava has promoted its marketplace as providing “rich geo data spanning billions of devices globally.” The data broker has also said it “delivers raw latitude/longitude data with volumes around 94B+ geo transactions per month, 125 million monthly active users, and 35 million daily active users, on average observing more than 90 daily transactions per device.”

The FTC said Kochava amassed the data by tracking the Mobile Advertising ID, or MAID, from phones and selling the data through Amazon Web Services or other outlets without first anonymizing the data. Anyone who purchases the data can then use it to track the comings and goings of many phone owners. Many of the allegations are based on the agency’s analysis of a data sample the company made available for free to promote sales of its data, which was available online with no restrictions on usage.

“In fact, in just the data Kochava made available in the Kochava Data Sample, it is possible to identify a mobile device that visited a women’s reproductive health clinic and trace that mobile device to a single-family residence,” the complaint alleged. “The data set also reveals that the same mobile device was at a particular location at least three evenings in the same week, suggesting the mobile device user’s routine. The data may also be used to identify medical professionals who perform, or assist in the performance, of abortion services.”

The FTC went on to allege: “In addition, because Kochava’s data allows its customers to track consumers over time, the data could be used to identify consumers’ past conditions, such as homelessness. In fact, the Kochava Data Sample identifies a mobile device that appears to have spent the night at a temporary shelter whose mission is to provide residence for at-risk, pregnant young women or new mothers.”

Kochava officials released a statement that read:

This lawsuit shows the unfortunate reality that the FTC has a fundamental misunderstanding of Kochava’s data marketplace business and other data businesses. Kochava operates consistently and proactively in compliance with all rules and laws, including those specific to privacy.

Prior to the legal proceedings, Kochava took the proactive step of announcing a new capability to block geo data from sensitive locations via Privacy Block, effectively removing that data from the data marketplace, and is currently in the implementation process of adding that functionality. Absent specificity from the FTC, we are constantly monitoring and proactively adjusting our technology to block geo data from other sensitive locations. Kochava sources 100% of the geo data in our data marketplace from third party data brokers all of whom represent that the data comes from consenting consumers.

For the past several weeks, Kochava has worked to educate the FTC on the role of data, the process by which it is collected and the way it is used in digital advertising. We hoped to have productive conversations that led to effective solutions with the FTC about these complicated and important issues and are open to them in the future. Unfortunately the only outcome the FTC desired was a settlement that had no clear terms or resolutions and redefined the problem into a moving target. Real progress to improve data privacy for consumers will not be reached through flamboyant press releases and frivolous litigation. It’s disappointing that the agency continues to circumvent the lawmaking process and perpetuate misinformation surrounding data privacy.

Two weeks ago, the company sued the FTC in anticipation of Monday’s complaint, alleging its practices comply with all rules and laws and that the agency’s actions were an overreach.

Monday’s complaint said it was possible to access the Kochava data sample with minimal effort. “A purchaser could use an ordinary personal email address and describe the intended use simply as ‘business,'” FTC attorneys wrote. “The request would then be sent to Kochava for approval. Kochava has approved such requests in as little as 24 hours.”

The data sample consisted of a subset of the paid data feed covering a rolling seven-day period. A single day’s worth of data contained more than 327,480,000 rows and 11 columns, drawn from more than 61.8 million mobile devices.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a press release. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”

Allegations like those in the complaint raise a larger question about reining in location tracking by mobile devices. People should carefully review iOS and Android privacy settings to limit the apps’ access to location data. Both OSes also allow users to turn off ad personalization, a measure that can limit the use of some identifiers such as MAIDs. iOS further enables users to bar one app from tracking their activity across other apps.

None of these measures guarantees that location data won’t get swept up and sold by companies such as Kochava, but it’s a good practice to follow them anyway.

https://arstechnica.com/?p=1876747




DuckDuckGo now offers anti-tracking email service to everyone

DuckDuckGo's Email Protection, now available in public beta, gives you an email address that will strip trackers from emails and forward the rest to you.
DuckDuckGo

DuckDuckGo’s tracker-removing email service, which has been available in private beta for a year, is now open to anyone who uses a DuckDuckGo mobile app, browser extension, or Mac browser. It has also added a few more privacy tools.

The service provides you a duck.com email address, one intended to be given out for the kind of “Subscribe to our newsletter for 20 percent off” emails you know exist only to harvest data and target you for ads. Email sent to your duck.com address forwards to your chosen primary email—but with trackers removed.

Email Protection now also cleans up links, stripping them of tracking modifiers and upgrading unencrypted HTTP URLs to HTTPS where possible, and, for the rare necessary reply, allows you to send directly from your duck address instead of exposing your primary email. DuckDuckGo claims that 85 percent of the emails it processed during the closed beta contained hidden trackers.
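The two link rewrites described above are straightforward to picture. The sketch below is an illustrative approximation, not DuckDuckGo's actual code: the set of tracking parameters is a common but hypothetical sample, and the HTTP-to-HTTPS upgrade is naive (a real tool would first verify that the site supports HTTPS).

```python
# Illustrative sketch of the two link rewrites: drop well-known
# tracking query parameters and upgrade http:// links to https://.
# Not DuckDuckGo's implementation; parameter list is a sample.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def clean_link(url):
    scheme, netloc, path, query, frag = urlsplit(url)
    if scheme == "http":      # naive upgrade; real tools check that
        scheme = "https"      # the site actually serves HTTPS first
    kept = [(k, v) for k, v in parse_qsl(query)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), frag))

print(clean_link("http://example.com/sale?id=7&utm_source=mail&fbclid=x"))
# -> https://example.com/sale?id=7
```

Note that functional query parameters (like `id=7` above) survive the rewrite; only the parameters that exist to attribute the click are dropped.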

To sign up for Email Protection, you’ll need to use either the DuckDuckGo mobile app for iOS or Android, use DuckDuckGo’s browser extension on Firefox, Chrome, Edge, or Brave, or use its beta Mac browser (the list for which must be joined in the DuckDuckGo mobile app).

In my experience, using the company’s apps, extensions, or browser isn’t necessary to keep the email forwarding service running, but they allow you to autofill your duck address and create more individual throwaway email addresses, which is handy for email filtering.

DuckDuckGo's Email Protection service works to remove trackers commonly embedded in commercial emails that send back the time opened, location, and device used, among other data.
DuckDuckGo

DuckDuckGo notes that the trackers tucked into email images and links can pass information back to the sender about when you opened a message, your geolocation when opening it, and which device you were using. Knowing your primary email address can also allow companies to connect it to Facebook and Google and target you for advertising across sites.

The company helpfully notes that it will not track you using its anti-tracking service. “When your Duck Addresses receive an email, we immediately apply our tracking protections and then forward it to you, never saving it on our systems. Sender information, subject lines…we don’t track any of it,” the company writes on its blog post. DuckDuckGo is also “committed to Email Protection for the long term,” and states that it worked during the closed beta to support millions of users.

Listing image by DuckDuckGo

https://arstechnica.com/?p=1876093




Amazon studio plans lighthearted show of Ring surveillance footage

Amazon's combining its endless reach with its constant surveillance—but for laughs.

For some people, the term “Ring Nation” might evoke a warrantless surveillance dystopia overseen by an omnipotent megacorp. To Amazon-owned MGM, Ring Nation is a clip show hosted by comedian Wanda Sykes, featuring dancing delivery people and adorable pets.

Deadline reports that the show, due to debut on September 26, is “the latest example of corporate synergy at Amazon.” Amazon owns household video security brand Ring, Hollywood studio MGM, and Big Fish, the producer of Ring Nation.

Viral videos captured by doorbell cameras have been hot for a while now. You can catch them on late-night talk shows, the r/CaughtOnRing subreddit, and on millions of TikTok users’ For You pages. Amazon’s media properties, perhaps sensing an opportunity to capitalize and soften Ring’s image, are sallying forth with an officially branded offering.

Ring Nation will feature “neighbors saving neighbors, marriage proposals, military reunions and silly animals,” Deadline writes. But Ring Nation might be aiming even higher, according to Ring founder Jamie Siminoff—to something approaching a salve for our deeply divided nation.

“Bringing the new community together is core to our mission at Ring, and Ring Nation gives friends and family a fun new way to enjoy time with one another,” Siminoff told Deadline. “We’re so excited to have Wanda Sykes join Ring Nation to share people’s memorable moments with viewers.”

Ring sharing its owners’ moments with other viewers has been a contentious issue. The surveillance company recently admitted giving police access to Ring recordings without user consent for “emergency” requests. Ring partnered with more than 600 law enforcement agencies to encourage Ring installations in communities and ease police access to footage. Ring has since made it easier to opt out of police requests.

Ring owners already had a lower-key way to share clips, mostly of crime or purported crime, through the Neighbors app. Doing so essentially reveals your location at a granular level. Responding to Congressional inquiries in the past, Ring has pointed to users as the party responsible for respecting their neighbors’ privacy, noting that it includes stickers and signs that comply with some states’ public recording policies.

Ring Nation producer Big Fish is also, incidentally, the producer of Live PD, the police ride-along series that was canceled after the killing of George Floyd and the resulting protests, then recently revived.

https://arstechnica.com/?p=1873311




FTC aims to counter the “massive scale” of online data collection

FTC Chair Lina Khan said the commission intends to act on commercial data collection, which happens at "a massive scale and in a stunning array of contexts."

The Federal Trade Commission has kicked off the rulemaking process for privacy regulations that could restrict online surveillance and punish bad data-security practices. It’s a move that some privacy advocates say is long overdue, as similar Congressional efforts face endless uncertainty.

The Advance Notice of Proposed Rulemaking, approved on a 3-2 vote along partisan lines, was spurred by commercial data collection, which occurs at “a massive scale and in a stunning array of contexts,” FTC Chair Lina M. Khan said in a press release. Companies surveil online activity, friend networks, browsing and purchase history, location data, and other details; analyze it with opaque algorithms; and sell it through “the massive, opaque market for consumer data,” Khan said.

Companies can also fail to secure that data or use it to make services addictive to children. They can also potentially discriminate against customers based on legally protected statuses like race, gender, religion, and age, the FTC said. What’s more, the release said, some companies make taking part in their “commercial surveillance” required for service or charge a premium to avoid it, employing dark patterns to keep the systems in place.

One of the three votes for the rulemaking came from Alvaro Bedoya, the founding director of Georgetown Law’s Center on Privacy & Technology, who was appointed three months ago. Bedoya, whose work has focused on unregulated facial recognition, has described privacy as a civil right.

This marks the first step in what could be a years-long, multi-administration effort by the FTC to act under its existing authority to regulate and enforce privacy protections. The International Association of Privacy Professionals said that, should the FTC work under its Magnuson-Moss, or “warranty,” authority, it faces “a lengthy process that can take several years to complete.” The FTC gave no indication of a timeline for enacting a final rule, though it’s unlikely to come before the end of this year.

Both critics and supporters want to see Congress act, too

The FTC is seeking input from stakeholders and the public, starting with a public forum on September 8. Before the announcement was a day old, however, many organizations issued statements on the move.

Consumer Reports (CR) had effusive praise for the regulatory agency seeking to do some regulating. “While we continue to support federal and state efforts to enact and enforce comprehensive privacy legislation, consumers can wait no longer to have their personal information protected,” Justin Brookman, director of technology policy at CR, said in a release. “The FTC has the authority to issue clear rules to safeguard personal data by holding companies accountable for unwanted data collection, use, and disclosure.”

CR was one of more than 40 organizations that urged the FTC to act on data privacy in 2021, including the American Civil Liberties Union, the Electronic Frontier Foundation, and Public Knowledge.

Energy and Commerce Chairman Frank Pallone, Jr. (D-NJ) said in a release that while the lack of online privacy in the US is “completely unacceptable,” the FTC shouldn’t be stuck with only its existing authority to deal with it. Pallone pushed his American Data Privacy and Protection Act, which would broaden the FTC’s authority to create and enforce data protections. That bill has cleared a key House committee, but it awaits an uncertain floor vote and an even tougher future in the Senate.

Senator Edward J. Markey (D-Mass.), meanwhile, believes that his own Children and Teens’ Online Privacy Protection Act is the best, if narrowest, way forward. The bill would create a Youth Privacy and Marketing Division at the FTC that focuses on protections for those 16 and younger.

Both Republican commissioners on the FTC would be fine with deferring privacy to Congress. Christine Wilson and Noah Phillips cited privacy laws that are seemingly moving ahead and the vast implications of any FTC action. Phillips said an FTC privacy action “will impact many thousands of companies, millions of citizens, and billions upon billions of dollars in commerce.”

https://arstechnica.com/?p=1873142




Microsoft trackers run afoul of DuckDuckGo, get added to blocklist


DuckDuckGo, the privacy-minded search company, says it will block trackers from Microsoft in its desktop web browser, following revelations in May that certain scripts from Bing and LinkedIn were getting a pass.

In a blog post, DuckDuckGo founder Gabriel Weinberg says that since security researcher Zach Edwards’ thread, he has heard from users concerned that “we didn’t meet their expectations around one of our browser’s web tracking protections.” Weinberg says that, over the next week, the company will add Microsoft to the list of third-party tracking scripts blocked by its mobile and desktop browsers, as well as by its extensions for other browsers.

“Previously, we were limited in how we could apply our 3rd-Party Tracker Loading Protection on Microsoft tracking scripts due to a policy requirement related to our use of Bing as a source for our private search results,” Weinberg writes. “We’re glad this is no longer the case. We have not had, and do not have, any similar limitation with any other company.”

There are a lot of pervasive, identifying things that load up on most modern webpages. At issue in DuckDuckGo’s apps was its default blocking of scripts from companies like Facebook and Google loading on third-party websites. DuckDuckGo, which uses Microsoft’s Bing as one of its sources for search results, had to allow some of Microsoft’s trackers to load “due to a policy requirement.” In a Reddit response at the time of the revelation, Weinberg noted that Microsoft’s trackers were still blocked in most other ways, such as from using third-party cookies or from fingerprinting visitors.

There’s more to the delicate dance between DuckDuckGo and Microsoft than just trackers, however. Microsoft also provides ads that run on DuckDuckGo’s search results. To allow advertisers to see when someone has clicked an ad on DuckDuckGo and arrived at their page, the DuckDuckGo apps won’t block requests from bat.bing.com. Weinberg notes that you can avoid this by turning off ads in DuckDuckGo search entirely. The company is working on validating ads in ways that can be non-tracking, Weinberg writes, akin to similar efforts by Safari and Firefox.
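Taken together, the behavior Weinberg describes (block a company's tracking domains on third-party sites, but let the ad-attribution endpoint through while ads are enabled) amounts to a blocklist check with a narrow carve-out. Here is a simplified sketch, with illustrative domain lists rather than DuckDuckGo's real ones:

```python
# Illustrative domains only; DuckDuckGo's actual blocklist is on GitHub.
TRACKER_DOMAINS = {"bing.com", "linkedin.com", "facebook.com",
                   "google-analytics.com"}
ALLOWED_ENDPOINTS = {"bat.bing.com"}  # ad-click attribution carve-out

def should_block(request_host: str, ads_enabled: bool = True) -> bool:
    """Block third-party tracker requests, with an exception for the
    ad-attribution endpoint when search ads are enabled."""
    if ads_enabled and request_host in ALLOWED_ENDPOINTS:
        return False
    # Match the host itself or any parent domain against the blocklist.
    parts = request_host.split(".")
    return any(".".join(parts[i:]) in TRACKER_DOMAINS
               for i in range(len(parts)))

print(should_block("bat.bing.com"))                     # → False
print(should_block("bat.bing.com", ads_enabled=False))  # → True
print(should_block("www.linkedin.com"))                 # → True
```

Turning ads off in DuckDuckGo search, as Weinberg notes, removes the carve-out entirely, which is what the `ads_enabled` flag models here.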

Finally, DuckDuckGo aims to be more open about its tracker blocking. The company committed its tracker blocklist to a public GitHub repository yesterday and published a new help doc on its tracking protections.

It can look like a lot of work over two scripts, but then DuckDuckGo lives inside the tricky balance of trying to make its search product convenient and relevant while offering its users as much privacy as websites can stand before breaking. And the 15-year-old company from Paoli, Pennsylvania, can’t just leave Bing behind entirely. Weinberg noted in his May Reddit response that most of its traditional search results and images come from Bing. “Really only two companies (Google and Microsoft) have a high-quality global web link index” due to the billion-dollar cost, Weinberg wrote. Every company that wants to provide search to the world faces either a duopoly or a very long journey.

Microsoft, meanwhile, continues to expand its advertising markets, most recently to Netflix, and, potentially, into its own operating system. Its advertising revenue was $3 billion for the quarter ending June 30, an increase of 15 percent year over year but the lowest growth rate in more than a year.

https://arstechnica.com/?p=1871913