Critical AI Browser Risk: Claude Extension Flaw Enabled Zero-Click Prompt Injection

Security researchers have uncovered a critical vulnerability in the Claude Chrome Extension developed by Anthropic, exposing users to zero-click attacks that could silently hijack the AI assistant.

The flaw allowed attackers to inject malicious prompts into the extension without any user interaction, simply by visiting a compromised or malicious website.

How the Attack Worked

The vulnerability, dubbed ShadowPrompt, was not a single bug but a chain of weaknesses that together created a powerful exploit path.

At its core, the issue relied on two key problems:

  • The extension trusted any subdomain under *.claude.ai
  • A cross-site scripting (XSS) vulnerability existed in a trusted subdomain component

By combining these, attackers could deliver malicious instructions that the extension would treat as legitimate user input.
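The first weakness can be illustrated with a small sketch. The check below is hypothetical (the extension's real validation code is not public), but it shows why wildcard subdomain trust is fragile: XSS on any single subdomain yields a "trusted" origin.

```javascript
// Hypothetical sketch -- not Anthropic's actual code. A wildcard check
// like this treats every *.claude.ai subdomain as equally trusted, so an
// XSS bug on any one subdomain defeats the whole scheme.
function isTrustedOrigin(origin) {
  return /^https:\/\/([a-z0-9-]+\.)?claude\.ai$/.test(origin);
}
```

Under this kind of check, a subdomain compromised via XSS passes validation just as the main site does, which is exactly the gap the attack chain exploited.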

This meant that simply visiting a webpage could trigger hidden background code that sent commands directly to the AI assistant, with no clicks, prompts, or visible interaction.
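A minimal attacker-side sketch of that zero-click step. The message shape below is invented for illustration; the extension's real messaging protocol is not public.

```javascript
// Hypothetical sketch of the attacker side. Nothing here requires a
// click: the script would fire on page load and post a message that a
// too-trusting content script accepts as user input.
function buildInjectedPrompt(command) {
  // Field names are illustrative, not the real protocol.
  return { type: "assistant-prompt", source: "user", text: command };
}

// In a real page this would run automatically on load, e.g.:
// window.addEventListener("load", () =>
//   window.postMessage(buildInjectedPrompt("export the chat history"), "*"));
```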

Zero-Click Prompt Injection Explained

Unlike traditional attacks that require phishing or user action, this vulnerability enabled zero-click prompt injection.

In practice, attackers could:

  • Inject hidden prompts into the Claude assistant
  • Make the extension execute actions as if requested by the user
  • Extract sensitive data such as API keys, credentials, or chat history
  • Manipulate browser activity or automate unintended actions

This fundamentally breaks the trust model of AI assistants, where the system assumes all received prompts originate from the user.
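One way to see the broken assumption: the assistant never learns where a prompt came from. A sketch of explicit provenance tagging (field names and source labels are hypothetical) shows what the missing distinction looks like:

```javascript
// Sketch: attach provenance to every prompt instead of assuming it is
// user input. Anything not typed by the user is marked untrusted.
function wrapPrompt(text, source) {
  return { text, source, trusted: source === "user-keyboard" };
}
```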

Why This Is a Serious Threat

What makes this vulnerability particularly dangerous is the level of access modern AI assistants have.

Browser-based AI tools like Claude are not passive; they can:

  • Read webpage content
  • Interact with applications
  • Execute multi-step tasks

When combined with prompt injection, this turns the assistant into a high-privilege attack vector, effectively acting on behalf of the attacker instead of the user.

This highlights a broader issue: trusted origin does not equal trusted intent.

Patch and Mitigation

The vulnerability has now been addressed through coordinated fixes:

  • Anthropic updated the extension to enforce stricter origin validation
  • The vulnerable XSS component in the trusted subdomain was patched
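The exact patch details have not been published, but "stricter origin validation" typically means replacing a wildcard match with an exact allowlist, roughly:

```javascript
// Illustrative only -- the hostnames and structure are assumptions, not
// the shipped fix. An exact-match allowlist rejects arbitrary subdomains.
const ALLOWED_ORIGINS = new Set(["https://claude.ai"]);

function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.has(origin);
}
```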

Users are strongly advised to update to the latest version of the Claude Chrome Extension, as older versions may still be vulnerable.

Additional security measures include:

  • Avoiding unnecessary browser extensions
  • Monitoring extension behavior and permissions
  • Treating AI-generated or externally injected prompts as untrusted
  • Implementing stricter browser and endpoint security controls
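For the point about treating externally injected prompts as untrusted, here is a toy screening pass over text before it reaches the assistant. Keyword heuristics like this are easily bypassed and are no substitute for provenance tracking; the patterns are invented for the example:

```javascript
// Toy heuristic -- illustrative only. Real defenses need provenance
// tracking; pattern lists like this are trivially evaded.
const SUSPICIOUS_PATTERNS = [
  /ignore (all |previous )?instructions/i,
  /api[_ ]?key/i,
];

function looksLikeInjectedPrompt(text) {
  return SUSPICIOUS_PATTERNS.some((re) => re.test(text));
}
```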

Why This Matters

This incident highlights a new class of vulnerabilities emerging in AI systems: prompt injection at the browser level.

As AI assistants gain more autonomy and deeper integration with user environments, they become increasingly attractive targets. Traditional security assumptions—like trusting internal domains—are no longer sufficient.

The takeaway is clear: securing AI systems requires not just protecting code, but also controlling how instructions are received, validated, and executed.
