Google quantum-proofs HTTPS by squeezing 2.5kB of data into 64-byte space

Google and other browser makers require that all TLS certificates be published in public transparency logs, which are append-only distributed ledgers. Website owners can then check the logs in real time to ensure that no rogue certificates have been issued for the domains they use. The transparency programs were implemented in response to the 2011 hack of Netherlands-based DigiNotar, which allowed the minting of 500 counterfeit certificates for Google and other websites, some of which were used to spy on web users in Iran.

Once a sufficiently large quantum computer makes Shor’s algorithm viable, it could be used to recover the private keys behind the classical signatures that protect the certificate logs, allowing those signatures to be forged. Ultimately, an attacker could forge the signed certificate timestamps used to prove to a browser or operating system that a certificate has been registered when it hasn’t.

To rule out this possibility, Google is adding cryptographic material from quantum-resistant algorithms such as ML-DSA. This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022.
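The acceptance rule behind such a hybrid scheme is simple: a signature validates only if both the classical and the post-quantum components check out. The sketch below illustrates that AND-logic only; the HMAC tags are symmetric stand-ins for real ECDSA and ML-DSA signatures, and the key names are hypothetical, not anything from Google's actual design.

```python
import hashlib
import hmac

# Symmetric stand-ins for two independent signature schemes: one plays
# the role of a classical (e.g. ECDSA) signature, the other a
# post-quantum (e.g. ML-DSA) one. Real schemes are asymmetric; only the
# acceptance rule is the point here.
CLASSICAL_KEY = b"classical-secret"   # hypothetical
PQ_KEY = b"ml-dsa-secret"             # hypothetical

def sign_hybrid(msg: bytes):
    return (hmac.new(CLASSICAL_KEY, msg, hashlib.sha256).digest(),
            hmac.new(PQ_KEY, msg, hashlib.sha256).digest())

def verify_hybrid(msg: bytes, sig_classical: bytes, sig_pq: bytes) -> bool:
    # Accept only if BOTH components verify, so a forgery requires
    # breaking the classical scheme AND the post-quantum one.
    ok_classical = hmac.compare_digest(
        sig_classical, hmac.new(CLASSICAL_KEY, msg, hashlib.sha256).digest())
    ok_pq = hmac.compare_digest(
        sig_pq, hmac.new(PQ_KEY, msg, hashlib.sha256).digest())
    return ok_classical and ok_pq

msg = b"example.com certificate"
sig_c, sig_pq = sign_hybrid(msg)
print(verify_hybrid(msg, sig_c, sig_pq))           # prints: True
print(verify_hybrid(msg, sig_c, b"\x00" * 32))     # prints: False (PQ check fails)
```

Breaking only the classical component (or only the post-quantum one) yields the second, rejected case, which is the whole point of the dual requirement.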

The Merkle Tree Certificates (MTCs) use Merkle trees to provide quantum-resistant assurance that a certificate has been published without having to include most of the lengthy keys and hashes. Combined with other techniques that reduce data sizes, MTCs will be roughly the same 64-byte length as what they replace, Cloudflare’s Bas Westerbaan said.
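Google's actual MTC construction has considerably more machinery, but the core trick, proving a certificate's inclusion with only a logarithmic number of hashes rather than the full log, can be sketched in a few lines. Everything below is an illustrative toy, not the MTC wire format: for 1,024 certificates, an inclusion proof needs just 10 sibling hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Build the Merkle tree bottom-up; each level pairs and hashes nodes.
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                 # duplicate the last node on odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    # Collect the sibling hash at each level: proof size is O(log n).
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])    # the sibling of the current node
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    # Recompute the path to the root using only the leaf and the siblings.
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

certs = [f"certificate-{i}".encode() for i in range(1024)]
levels = build_tree(certs)
root = levels[-1][0]
proof = inclusion_proof(levels, 42)
print(len(proof), verify(certs[42], 42, proof, root))   # prints: 10 True
```

A verifier holding only the 32-byte root can check any certificate's presence in the log from those few hashes, which is what lets the scheme avoid shipping the full set of lengthy keys and hashes.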

The new system has already been implemented in Chrome. For the time being, Cloudflare is enrolling roughly 1,000 TLS certificates to test how well the MTCs work, and it is generating the distributed ledger itself; the plan is for CAs to eventually fill that role. The Internet Engineering Task Force standards body recently formed a working group, called PKI, Logs, And Tree Signatures, that is coordinating with other key players to develop a long-term solution.

“We view the adoption of MTCs and a quantum-resistant root store as a critical opportunity to ensure the robustness of the foundation of today’s ecosystem,” Google’s Friday blog post said. “By designing for the specific demands of a modern, agile internet, we can accelerate the adoption of post-quantum resilience for all web users.”

https://arstechnica.com/security/2026/02/google-is-using-clever-math-to-quantum-proof-https-certificates/




Google reveals Nano Banana 2 AI image model, coming to Gemini today

With Nano Banana 2, Google promises consistency for up to five characters at a time, along with accurate rendering of as many as 14 different objects per workflow. This, along with richer textures and “vibrant” lighting, will aid in visual storytelling. Google is also expanding the range of available aspect ratios and resolutions, from 512px square up to 4K widescreen.

So what can you do with Nano Banana 2? Google has provided some example images with associated prompts. These are, of course, handpicked images, but Nano Banana has been a popular image model for good reason. This degree of improvement seems believable based on past iterations of Nano Banana.

Google AI infographic

Prompt: High-quality flat lay photography creating a DIY infographic that simply explains how the water cycle works, arranged on a clean, light gray textured background. The visual story flows from left to right in clear steps. Simple, clean black arrows are hand-drawn onto the background to guide the viewer’s eye. The overall mood is educational, modern, and easy to understand. The image is shot from a top-down, bird’s-eye view with soft, even lighting that minimizes shadows and keeps the focus on the process.

Credit: Google


AI museum comparison

Prompt: Create an image of Museum Clos Lucé. In the style of bright colored Synthetic Cubism. No text. Your plan is to first search for visual references, and generate after. Aspect ratio 16:9.

Credit: Google


AI farm image

Prompt: Create an image of these 14 characters and items having fun at the farm. The overall atmosphere is fun, silly and joyful. It is strictly important to keep identity consistent of all the 14 characters and items.

Credit: Google


Google must be pretty confident in this model’s capabilities because it will be the only one available going forward. Starting now, Nano Banana 2 will replace both the standard and Pro variants of Nano Banana across the Gemini app, search, AI Studio, Vertex AI, and Flow.

In the Gemini app and on the website, Nano Banana 2 will be the image generator for the Fast, Thinking, and Pro settings. It’s possible there will eventually be a Nano Banana 2 Pro—Google tends to release elements of new model families one at a time. For now, it’s all “Flash” Image.

https://arstechnica.com/ai/2026/02/google-releases-nano-banana-2-ai-image-generator-promises-pro-results-with-flash-speed/




Google announces Gemini 3.1 Pro, says it’s better at complex problem-solving

Another day, another Google AI model. Google has really been pumping out new AI tools lately, having just released Gemini 3 in November. Today, it’s bumping the flagship model to version 3.1. The new Gemini 3.1 Pro is rolling out (in preview) for developers and consumers today with the promise of better problem-solving and reasoning capabilities.

Google announced improvements to its Deep Think tool last week, and apparently, the “core intelligence” behind that update was Gemini 3.1 Pro. As usual, Google’s latest model announcement comes with a plethora of benchmarks that show mostly modest improvements. In the popular Humanity’s Last Exam, which tests advanced domain-specific knowledge, Gemini 3.1 Pro scored a record 44.4 percent. Gemini 3 Pro managed 37.5 percent, while OpenAI’s GPT 5.2 got 34.5 percent.

Gemini 3.1 Pro benchmarks

Credit: Google


Google also calls out the model’s improvement in ARC-AGI-2, which features novel logic problems that can’t be directly trained into an AI. Gemini 3 was a bit behind on this evaluation, reaching a mere 31.1 percent versus scores in the 50s and 60s for competing models. Gemini 3.1 Pro more than doubles Google’s score, reaching a lofty 77.1 percent.

Google has often gloated that its newly released models have already hit the top of the Arena leaderboard (formerly LM Arena), but that’s not the case this time. For text, Claude Opus 4.6 edges out the new Gemini by four points at 1504. For code, Opus 4.6, Opus 4.5, and GPT 5.2 High all run ahead of Gemini 3.1 Pro by a bit more. It’s worth noting, however, that the Arena leaderboard is run on vibes. Users vote on the outputs they like best, which can reward outputs that look correct regardless of whether they are.

https://arstechnica.com/google/2026/02/google-announces-gemini-3-1-pro-says-its-better-at-complex-problem-solving/




Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today

Sour notes

AI-generated music is not a new phenomenon. Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée.

Still, you have to seek out tools like that, and Google is bringing similar capabilities to the Gemini app. Since Gemini is one of the most popular AI platforms, we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID watermark embedded within. That means you’ll always be able to check whether a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags.

Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound; instead, it’s trained to take the name as “broad creative inspiration.” Google notes, however, that this process is not foolproof and that some output might still imitate an artist too closely. In those cases, Google invites users to report the shared content.

Lyria 3 is going live in the Gemini web interface today and should be available in the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, but Google plans to add more languages soon. While all users will have some access to music generation, those with AI Pro and AI Ultra subscriptions will have higher usage limits, but the specifics are unclear.

https://arstechnica.com/google/2026/02/gemini-can-now-generate-ai-music-for-you-no-lyrics-required/




Google’s Pixel 10a arrives on March 5 for $499 with specs and design of yesteryear

It’s that time of year—a new budget Pixel phone is about to hit virtual shelves. The Pixel 10a will be available on March 5, and pre-orders go live today. The 9a will still be on sale for a while, but the 10a will be headlining Google’s store. However, you might not notice unless you keep up with the Pixel numbering scheme. This year’s A-series Pixel is virtually identical to last year’s, both inside and out.

Last year’s Pixel 9a was a notable departure from the older design language, but Google made few changes for 2026. We liked that the Pixel 9a emphasized battery capacity and moved to a flat camera bump, and this time, it’s really flat. Google says the camera now sits totally flush with the back panel. This is probably the only change you’ll be able to identify visually.

Specs at a glance: Google Pixel 9a vs. Pixel 10a
Phone: Pixel 9a | Pixel 10a
SoC: Google Tensor G4 | Google Tensor G4
Memory: 8GB | 8GB
Storage: 128GB, 256GB | 128GB, 256GB
Display: 1080×2424 6.3″ pOLED, 60–120 Hz, Gorilla Glass 3, 2,700 nits (peak) | 1080×2424 6.3″ pOLED, 60–120 Hz, Gorilla Glass 7i, 3,000 nits (peak)
Cameras: 48 MP primary, f/1.7, OIS; 13 MP ultrawide, f/2.2; 13 MP selfie, f/2.2 | 48 MP primary, f/1.7, OIS; 13 MP ultrawide, f/2.2; 13 MP selfie, f/2.2
Software: Android 15 (at launch), 7 years of OS updates | Android 16, 7 years of OS updates
Battery: 5,100 mAh, 23 W wired charging, 7.5 W wireless charging | 5,100 mAh, 30 W wired charging, 10 W wireless charging
Connectivity: Wi-Fi 6e, NFC, Bluetooth 5.3, sub-6 GHz 5G, USB-C 3.2 | Wi-Fi 6e, NFC, Bluetooth 6.0, sub-6 GHz 5G, USB-C 3.2
Measurements: 154.7×73.3×8.9 mm; 185 g | 153.9×73×9 mm; 183 g

Google also says the new Pixel will have a slightly upgraded screen. The resolution, size, and refresh rate are unchanged, but peak brightness has been bumped from 2,700 nits to 3,000 nits (the same as the base model Pixel 10). Plus, the cover glass has finally moved beyond Gorilla Glass 3 to Gorilla Glass 7i, which supposedly has improved scratch and drop protection.

Pixel 10a in Berry

Credit: Google


Google notes that more of the phone is constructed from recycled material, 100 percent for the aluminum frame and 81 percent for the plastic back. There’s also recycled gold, tungsten, cobalt, and copper inside, amounting to about 36 percent of the phone’s weight. The phone also continues to have a physical SIM slot, which was removed from the Pixel 10 series last year. The device’s USB-C 3.2 port can also charge slightly faster than the 9a (30 W versus 23 W), and wireless charging has gone from 7.5 W to 10 W. There are no Qi2 magnets inside, though.

Internally, the Pixel 10a is even more like its predecessor. Unlike past A-series phones, this one doesn’t have the latest Tensor chip—it’s sticking with the same Tensor G4 from the 9a. That’s a bummer, as the G5 was a bigger leap than most of Google’s chip upgrades. The company says it stuck with the G4 to “balance affordability and performance.”

https://arstechnica.com/gadgets/2026/02/googles-pixel-10a-arrives-on-march-5-for-499-with-specs-and-design-of-yesteryear/




AI Mode, Lasorella (Agcom): “We are about to file a complaint with the EU.” Ciulli (Google): “It doesn’t take readers away from newspapers”

“We have been presented with the issue of the relationship between the press and Google’s AI Mode. The German authority has already acted, and we are about to file a complaint with the European Commission; it is a clear case of impact on the news media.” So said Giacomo Lasorella, president of Agcom, at the “Epistemia e Intelligenza artificiale” conference at La Sapienza University in Rome.

AI Mode, Lasorella (Agcom): The risk is that people stop reading newspapers

“When searching in AI Mode, the risk is that people no longer read the newspapers,” he added. “There is a risk of compressing freedom of information and citizens’ right to access multiple sources of information, enshrined in Article 3 of the European Media Freedom Act.”

“This is just one of the cases the regulator has seen in which the rules affect these services,” the Agcom president emphasized. “We are trying to address these issues effectively together with the Commission and the other European regulators, but we need the support of the knowledge the scientific community brings us; today’s complexity requires constant dialogue between institutions and research.”

Lasorella (Agcom): The DSA plays a central role

Lasorella then said that the European DSA, the EU regulation governing digital platforms, has taken on “a central role” and today represents “the only safeguard at the global level in a world undergoing transformation.”

“Looking at the glass half full, a great deal is being done,” Lasorella concluded. “But taking a systemic view, I would venture to say that we need to imagine a broader role than that of the individual national authorities, since the relationship between the EU Commission and the home countries of the digital platforms compresses the role of national authorities.”

Ciulli (Google): With AI we are improving search, not access to information

“I don’t believe that because of AI Overview and AI Mode people will stop reading newspapers; it would be worrying if they did. They are a natural evolution of the search engine: we are improving the old way of searching, not access to information.” So said Diego Ciulli, Google’s Head of Government Affairs and Public Policy in Italy, also speaking at the Epistemia conference.

“The search engine tries to rank authoritative sources, and beyond a summary we provide links to the source for further reading,” he added. “That is the difference between a search engine and a chatbot. AI Mode remains a search engine, while Gemini is your assistant for creative work; confusing the two is like putting your dishes in the washing machine.”

“AI Overview and AI Mode are very good at delivering facts quickly; they do not do any kind of analysis or in-depth coverage, and I don’t think they will even in the near future,” Ciulli concluded. “On the contrary, what we are seeing is that people search more, including in Europe, since AI Overview and AI Mode arrived, and this inevitably drives more traffic out of Google. New formats are also developing, because people want to know more and listen to two-hour podcasts. That has happened thanks to these tools.”

The debate is open and directly involves publishers as well as the government, in the person of Alberto Barachini, undersecretary for Information and Publishing, who has already been critical of the new feature.


https://www.key4biz.it/ai-mode-lasorella-agcom-stiamo-per-fare-segnalazione-allue-ciulli-google-non-toglie-lettori-ai-giornali/565806/




The first Android 17 beta is now available on Pixel devices

In short, the first Android 17 beta is chock full of things that may interest developers and modders, but there’s little in the way of user-facing changes right now.

Android 17 release schedule

Google has made some notable changes to how it releases Android updates, and Android 17 continues the trend. Like last year, there will be two Android 17 releases in 2026. The first one, coming in Q2, will be the more significant of the two. It will include a raft of new APIs, behavioral changes, and feature updates. This split release setup was implemented to better align with when major OEMs release new devices, but Android 17 availability still focuses mainly on Pixels. Google’s phones receive immediate updates, but everyone else has to wait for OEMs to roll out updates over the following weeks or months.

At the end of the year, another version (you can think of it as Android 17.1 even though Google doesn’t give it a name) will become available on supported devices. This “minor SDK release” will include some API and feature changes, but Google doesn’t have any details at this time.

Android release schedule

Credit: Google


Before we get to that, Google plans to launch a second beta release in March. The company says Beta 2 will include final APIs, allowing developers to complete testing and roll out updates. Developers will have “several months” to get that work done before the final version hits Pixels.

In 2025, Google also changed the way it updates the open source parts of Android. Rather than regular code dumps, Google now only updates the Android Open Source Project (AOSP) twice yearly, in the second and fourth quarters, when new versions are released. That makes it harder to know what to expect from upcoming versions of Android, but Google insists this is more efficient.

If you want to check out Android 17 today, you’ll need a Pixel device. The beta supports the Pixel 6, Pixel 7, Pixel 8, Pixel 9, and Pixel 10 generations, and the Pixel Tablet and original Pixel Fold are also included. Other phone makers may release beta builds in the weeks ahead, but it’s a Google-only event for now. You can opt in to receive an OTA update to Android 17 on the beta program website.

https://arstechnica.com/gadgets/2026/02/the-first-android-17-beta-is-now-available-on-pixel-devices/




Platforms bend over backward to help DHS censor ICE critics, advocates say

“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” EFF’s complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is confident it can win the fight to expose government demands, but like most FOIA lawsuits, the case is expected to move slowly. That’s unfortunate, he said, because ICE activity is escalating, and delays in addressing these concerns could irreparably harm speech at a pivotal moment.

Like users, platforms are seemingly victims, too, FIRE senior attorney Colin McDonnell told Ars.

They’ve been forced to override their own editorial judgment while navigating implicit threats from the government, he said.

“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.

But platforms do have a choice and could be doing more to protect users, the EFF has said. Platforms could even serve as a first line of defense, requiring officials to get a court order before complying with any requests.

Platforms may now have good reason to push back against government requests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to address the ICEBlock removal and FOIA lawsuits, the government has quickly withdrawn requests to unmask Facebook users soon after litigation began.

“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.

Platforms could view that as evidence that government pressure only works when platforms fail to put up a bare-minimum fight, Trujillo said.

https://arstechnica.com/tech-policy/2026/02/platforms-bend-over-backward-to-help-dhs-censor-ice-critics-advocates-say/




It took two years, but Google released a YouTube app on Vision Pro

When Apple’s Vision Pro mixed reality headset launched in February 2024, users were frustrated at the lack of a proper YouTube app—a significant disappointment given the device’s focus on video content consumption, and YouTube’s strong library of immersive VR and 360 videos. That complaint continued through the release of the second-generation Vision Pro last year, including in our review.

Now, two years later, an official YouTube app from Google has launched on the Vision Pro’s app store. It’s not just a port of the iPad app, either—it has panels arranged spatially in front of the user as you’d expect, and it supports 3D videos, as well as 360- and 180-degree ones.

YouTube’s App Store listing says users can watch “every video on YouTube” (there’s a screenshot of a special interface for Shorts vertical videos, for example) and that they get “the full signed-in experience” with watch history and so on.

Shortly after the Vision Pro launched, many users complained to YouTube about the lack of an app. They were referred to the web interface—which worked OK for most 2D videos, but it obviously wasn’t an ideal experience—and were told that a Vision Pro app was on the roadmap.

Two years of silence followed. Third-party apps popped up, like the relatively popular Juno app, but that one was pulled from the App Store after Google claimed it violated API policies. (Some others remained or became available later.)

Google is building out its own XR ambitions, so it’s possible the Vision Pro app benefited from some of that work, but it’s unclear how this all came to be. It’s here now, though. Next up: Netflix, right? Sadly, that’s unlikely; unlike Google, Netflix has not announced any such intention.

https://arstechnica.com/gadgets/2026/02/it-took-two-years-but-google-released-a-youtube-app-on-vision-pro/




Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

On Thursday, Google announced that “commercially motivated” actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products that frames the company as the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity “model extraction” and considers it intellectual property theft, which is a somewhat loaded position, given that Google’s LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google’s Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI’s terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

Even so, Google’s terms of service forbid people from extracting data from its AI models this way, and the report is a window into the world of somewhat shady AI model-cloning tactics. The company believes the culprits are mostly private companies and researchers looking for a competitive edge, and said the attacks have come from around the world. Google declined to name suspects.

The deal with distillation

Typically, the industry calls this practice of training a new model on a previous model’s outputs “distillation,” and it works like this: If you want to build your own large language model (LLM) but lack the billions of dollars and years of work that Google spent training Gemini, you can use a previously trained LLM as a shortcut.

https://arstechnica.com/ai/2026/02/attackers-prompted-gemini-over-100000-times-while-trying-to-clone-it-google-says/