Google Chrome Secretly Installs 4GB AI Model, Gemini Nano, On Devices
Google Chrome has reportedly installed a 4GB AI model, Gemini Nano, onto users' devices without explicit consent or notification, raising privacy concerns.

Google Chrome has begun installing a 4GB artificial intelligence model, known as Gemini Nano, onto some users' devices without their explicit permission or awareness, according to reports. Gemini Nano is designed to run AI tasks directly on local hardware, such as smartphones and laptops, rather than relying on cloud servers. This development was highlighted by Alexander Hanff, a Swedish computer scientist and privacy advocate also known as That Privacy Guy, who stated that Google does not inform users even after the AI model has been installed.
The installation of Gemini Nano reportedly only occurs on devices that meet specific hardware requirements. It remains unclear how many individuals have had the model installed on their systems. Gemini Nano is capable of performing various on-device functions, such as identifying potential scam phone calls, assisting with text message composition, summarizing audio recordings, and analyzing screenshots from Pixel phones. It is distinct from the "AI Mode" feature accessible via the address bar, where queries are processed on Google's Gemini servers.
A spokesperson for Google addressed the matter, confirming that Gemini Nano will be automatically removed if a device lacks sufficient resources, including processing power, RAM, storage space, or network bandwidth. "In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings," the spokesperson told CNET. "Once disabled, the model will no longer download or update." Google has provided additional information regarding its on-device generative AI models within Chrome on a dedicated web page.
Locating and Removing Gemini Nano
For users concerned about whether Gemini Nano has been installed alongside their Chrome browser, a manual check is possible. Open the system's file manager: "File Explorer" on Windows, "Files" on Chromebooks, or "Finder" on Macs, and search for a folder named "OptGuideOnDeviceModel". If that folder contains a file titled "weights.bin", Gemini Nano is installed. Hanff emphasized that Chrome users are unlikely to know the model is present unless they actively search for it, since the browser never requested consent and gives no indication that the installation occurred.
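For readers comfortable with a script, the manual check above can be automated. The sketch below is a minimal, hedged example: it walks a directory tree looking for a "weights.bin" file inside any folder path containing "OptGuideOnDeviceModel". The starting directory is an assumption (it defaults to the user's home folder), since Chrome's profile location varies by operating system and installation.

```python
import os
import sys


def find_gemini_nano(root):
    """Walk `root` looking for Chrome's on-device model artifacts.

    Returns the full path of a "weights.bin" file found under a
    folder named "OptGuideOnDeviceModel", or None if no match.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        # The model folder may nest versioned subdirectories, so
        # check every component of the current path.
        if "OptGuideOnDeviceModel" in dirpath.split(os.sep):
            if "weights.bin" in filenames:
                return os.path.join(dirpath, "weights.bin")
    return None


if __name__ == "__main__":
    # Starting point is an assumption -- pass a Chrome profile or
    # user-data directory as the first argument to narrow the search.
    start = sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser("~")
    hit = find_gemini_nano(start)
    print(hit if hit else "OptGuideOnDeviceModel/weights.bin not found")
```

Searching from the home directory can be slow on large disks; pointing the script at Chrome's user-data folder, if you know where it lives on your system, is much faster.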
Removing Gemini Nano can be achieved in a couple of ways. The most straightforward, albeit drastic, approach is to uninstall Google Chrome entirely. Alternatively, users can open Chrome's advanced configuration settings by typing "chrome://flags" into the browser's address bar, locate the option labeled "Enables optimization guide on device," and set it to "Disabled," which prevents the model from downloading or updating.
The implications of this unannounced installation are significant. Hanff suggests that Google's move may be a strategic effort to reduce its own operational costs by shifting AI processing from its servers to users' personal computers. "Running inference on users' own hardware allows them to push 'AI features' without the compute costs," Hanff explained. This strategy could potentially lead to substantial savings for the tech giant, especially as the demand for AI-powered features continues to grow.
Furthermore, Hanff raised concerns about potential legal ramifications, particularly within the European Union. He posited that the unconsented installation of Gemini Nano could contravene the principles of lawfulness, fairness, and transparency mandated by the EU's General Data Protection Regulation (GDPR). Hanff also suggested that, given the potential environmental impact of widespread AI model deployments, Google should have disclosed this initiative under regulations like the Corporate Sustainability Reporting Directive.
"Google has given us every reason not to trust them with a history spanning two decades of global privacy violations at massive scale," Hanff stated. "So, I suspect they figured asking permission (what the law requires) would hinder their ability to push this model and, of course, whatever comes after it." The lack of transparency surrounding the installation of this AI model highlights ongoing debates about user consent, data privacy, and the ethical deployment of artificial intelligence technologies by major tech corporations.
