Fake AI browser extensions stole data from 900k users, Microsoft finds

Hundreds of thousands of users have installed malicious browser extensions that impersonated legitimate AI assistant tools to harvest chat histories and browsing data, Microsoft has revealed.

According to a report from Microsoft's Defender research team, the malicious Chromium-based extensions reached approximately 900,000 installs. The campaign also affected more than 20,000 enterprise tenants, where employees frequently interact with AI tools using sensitive information.

The extensions collected full URLs and AI chat content from platforms including ChatGPT and DeepSeek. This exposed organisations to potential leaks of proprietary code, internal workflows, strategic discussions, and other confidential data, Microsoft said.

How the attack worked

The threat actor published look-alike AI assistant extensions in the Chrome Web Store, using branding and descriptions that resembled legitimate productivity tools such as AITOPIA. Because Microsoft Edge supports Chrome Web Store extensions, the same listings could reach users across both browsers.

Once installed, the extensions operated continuously within the browser context. They harvested AI chat content and browsing telemetry directly from active sessions, staging the data locally before exfiltration.

The extensions maintained communication with attacker-controlled infrastructure using standard web protocols, making the activity difficult to distinguish from normal browser traffic. At regular intervals, data was sent via HTTPS POST requests to domains including deepaichats[.]com and chatsaigpt[.]com. After transmission, local buffers were cleared to reduce forensic visibility.
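The stage-then-flush behaviour Microsoft describes can be illustrated with a short sketch. Everything here is hypothetical — the field names, endpoint, and payload shape are placeholders for illustration, not recovered malware code, and the actual network call is deliberately omitted.

```python
import json
import urllib.request

# Hypothetical local staging buffer: harvested chat snippets and URLs
# would accumulate here between transmissions.
staged_events = []

def stage(event: dict) -> None:
    """Append a harvested record to the local buffer."""
    staged_events.append(event)

def flush(endpoint: str) -> bytes:
    """Serialise the buffer for an HTTPS POST, then clear it —
    the post-transmission buffer wipe Microsoft observed."""
    payload = json.dumps({"events": staged_events}).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # transmission omitted in this sketch
    staged_events.clear()  # clearing local buffers reduces forensic visibility
    return payload
```

Because the traffic is ordinary JSON over HTTPS, nothing in the request itself stands out from legitimate extension telemetry — which is why Microsoft points defenders at the destination domains rather than the protocol.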

Telemetry enabled by default after updates

Microsoft noted that a misleading consent mechanism enabled ongoing data collection. Although users could initially disable telemetry, subsequent updates automatically re-enabled it without clearly notifying them.

The extensions logged nearly all visited URLs, including internal sites, along with chat snippets, model names, and a persistent identifier. The code included minimal filtering and weak consent handling, Microsoft’s analysis found.
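The consent behaviour described above amounts to an update handler that silently discards the user's opt-out. A minimal sketch, with hypothetical setting names:

```python
# Hypothetical settings store; "telemetry" represents the user-facing opt-out.
settings = {"telemetry": True}

def user_disables_telemetry() -> None:
    """The user's explicit choice to stop data collection."""
    settings["telemetry"] = False

def on_extension_update() -> None:
    # The misleading pattern: every update unconditionally re-enables
    # collection, overriding the user's earlier choice without notice.
    settings["telemetry"] = True
```

A well-behaved update handler would preserve the stored value; unconditionally resetting it is what Microsoft flags as weak consent handling.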

Scale of the campaign

The threat actor targeted the growing ecosystem of AI-assistant browser extensions, capitalising on the fact that many knowledge workers install sidebar tools to interact with models such as ChatGPT and DeepSeek. These extensions often require broad page-level permissions for convenience.

In some cases, agentic browsers automatically downloaded the extensions without explicit user approval, reflecting how convincing the names and descriptions appeared, Microsoft said.

Mitigation guidance

Microsoft advised organisations to monitor network traffic to the known endpoints, including *.chatsaigpt.com and *.deepaichats.com. It recommended auditing browser extensions using Microsoft Defender Vulnerability Management, enabling SmartScreen and Network Protection, and establishing organisational policies on AI use.
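One straightforward way to act on the network indicators is to scan proxy or DNS logs for the listed domains and their subdomains. A minimal sketch in Python; the log lines shown in usage are a hypothetical format:

```python
import re

# Matches chatsaigpt.com / deepaichats.com and any subdomain of them,
# per the wildcard endpoints in Microsoft's guidance.
IOC_PATTERN = re.compile(
    r"\b(?:[\w-]+\.)*(?:chatsaigpt|deepaichats)\.com\b",
    re.IGNORECASE,
)

def find_ioc_hits(log_lines):
    """Return the log lines that reference the attacker-controlled domains."""
    return [line for line in log_lines if IOC_PATTERN.search(line)]
```

Matching lines would then be triaged to identify which hosts, and which installed extensions, generated the traffic.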

Users were also advised to review their installed extensions and remove any unknown or unverified tools.
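That review can be assisted by a script that reads each extension's manifest from the browser profile. A sketch assuming Chrome's on-disk layout (`Extensions/<id>/<version>/manifest.json`); the profile path itself varies by OS and browser and is supplied by the caller:

```python
import json
from pathlib import Path

def list_extensions(extensions_dir: str):
    """Return (extension_id, name) pairs for each installed extension
    by reading its manifest.json. Names beginning with '__MSG_' are
    localisation placeholders and are reported as-is."""
    results = []
    for manifest in sorted(Path(extensions_dir).glob("*/*/manifest.json")):
        ext_id = manifest.parent.parent.name
        try:
            name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "?")
        except (json.JSONDecodeError, OSError):
            name = "<unreadable manifest>"
        results.append((ext_id, name))
    return results
```

Anything in the output that the user does not recognise, or cannot match to a verified Web Store listing, is a candidate for removal.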
