Claude Code Security Incident: What Professionals Need to Know About the Source Leak
Anthropic's flagship AI coding agent accidentally exposed its entire codebase. Here's what enterprise users need to do right now to protect their environments.
On March 31, 2026, Anthropic accidentally exposed the full source code of Claude Code (its flagship terminal-based AI coding agent) through a 59.8 MB JavaScript source map (.map) file bundled in the public npm package @anthropic-ai/claude-code version 2.1.88 . For the hundreds of thousands of professionals who rely on Claude Code in business environments, this isn't just a tech industry embarrassment — it's an urgent security concern that requires immediate action.
The leak is particularly serious because pre-existing flaws (e.g., CVE-2025-59536, CVE-2026-21852, RCE and API key exfiltration via malicious repo configs, hooks, MCP servers, and env vars) are now far easier to weaponize. Threat actors with full source visibility can craft precise malicious repositories or project files that trigger arbitrary shell execution or credential theft simply by cloning/opening an untrusted repo .
What Happened: The Technical Details That Matter
Security researcher Chaofan Shou (@shoucccc on X), an intern at Solayer Labs, discovered that version 2.1.88 of the @anthropic-ai/claude-code npm package shipped with a 59.8 MB JavaScript source map file (cli.js.map). He posted the download link to X at approximately 4:23 AM ET . Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers .
The root cause was deceptively simple: Claude Code uses the Bun runtime for its build process, and Bun generates source maps by default. Someone needed to add *.map to the .npmignore file. Nobody did . Making matters worse, Anthropic acquired the Bun JavaScript runtime at the end of 2025, and Claude Code is built on top of it. A known Bun bug (issue #28001, filed on March 11, 2026) reports that source maps are served in production builds even when the documentation says they shouldn't be. The bug was open for 20 days before this happened .
Why Enterprise Users Should Be Concerned
This leak creates three immediate security risks for professional environments:
1. Perfect Storm Supply Chain Attack
The timing couldn't have been worse. The leak coincided exactly with a separate malicious Axios npm supply chain attack (RATs published March 31, 00:21–03:29 UTC), creating a perfect storm for anyone updating Claude Code via npm that day . If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT) .
2. Weaponized Malicious Repositories
The repository looks like it's trying to pass itself off as leaked TypeScript source code for Anthropic's Claude Code CLI. The README file even claims the code was exposed through a .map file in the npm package and then rebuilt into a working fork with "unlocked" enterprise features and no message limits. The repository link appears near the top of Google results for searches like "leaked Claude Code," which makes it easy for curious users to encounter .
3. Accelerated Vulnerability Exploitation
The leaked code reveals exactly how Claude Code's security systems work, making existing vulnerabilities far easier to exploit. Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to "trick" Claude Code into running background commands or exfiltrating data before you ever see a trust prompt .
What Was Exposed: The Security Architecture Blueprint
The leaked source code provides a complete roadmap of Claude Code's internal workings, including:
Hidden Features and Capabilities
KAIROS is referenced over 150 times in the source code. Named after the Ancient Greek concept of "the right moment" (the opportune time to act), it represents a fully built but unshipped autonomous daemon mode for Claude Code . KAIROS represents a fundamental shift in user experience: an autonomous daemon mode. While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream .
Security Control Bypass Methods
The source code reveals multiple ways attackers can bypass Claude Code's built-in security controls. For example, Check Point Research has discovered critical vulnerabilities in Anthropic's Claude Code that allow attackers to achieve remote code execution and steal API credentials through malicious project configurations. The vulnerabilities exploit various configuration mechanisms including Hooks, Model Context Protocol (MCP) servers, and environment variables -executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories .
"Undercover Mode" for Stealth Operations
Perhaps most concerning for enterprise environments is the discovery of Undercover Mode. The technical reality is that Anthropic employees use Claude Code on open-source projects, and this mode prevents internal details from leaking through commit metadata. The ethical concern is that AI-generated code enters open-source repositories without attribution to either the AI or the company .
How to Protect Your Claude Code Installation Right Now
If your organization uses Claude Code, take these immediate steps:
Step 1: Check for Compromised axios Dependencies
You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js . Run this command in your terminal:
grep -r "axios.*1.14.1\|axios.*0.30.4\|plain-crypto-js" package-lock.json yarn.lock bun.lockb 2>/dev/null
If found, immediately downgrade axios and rotate all API keys, environment variables, and secrets that may have been accessed.
Step 2: Update Claude Code Immediately
If you're using Claude Code: Update immediately past v2.1.88 and use the native installer going forward (curl -fsSL https://claude.ai/install.sh | bash) . Avoid npm installations until Anthropic provides additional security guidance.
Step 3: Audit Repository Configuration Files
The leaked source reveals that an attacker publishes a legitimate-looking GitHub repository containing a CLAUDE.md file — a standard configuration file Claude Code reads automatically when entering a project directory. When a developer clones the repository and asks Claude Code to build the project, the compound command exceeds the 50-subcommand threshold, deny rules are skipped, and credentials are silently exfiltrated .
Before opening any repository with Claude Code, manually inspect these files:
.claude/settings.jsonCLAUDE.md.mcp.json- Any files containing MCP server configurations
Step 4: Implement Repository Security Controls
Given the new attack vectors revealed by the leak, implement these enterprise-level controls:
- Repository Allowlisting: Only allow Claude Code to operate on pre-approved, internally audited repositories
- Network Segmentation: Run Claude Code in isolated environments that cannot access production credentials or sensitive data
- API Key Rotation: Immediately rotate all Anthropic API keys and implement shorter rotation cycles
- Monitoring and Logging: Enable comprehensive logging for all Claude Code API usage and file access patterns
The Broader Implications for Enterprise AI Adoption
This incident highlights a fundamental challenge in enterprise AI deployment: These vulnerabilities in Claude Code highlight a critical challenge in modern development tools: balancing powerful automation features with security . As organizations increasingly integrate AI tools into their development workflows, they must also evolve their security frameworks to address new attack vectors.
For compliance-conscious organizations, this leak underscores the importance of treating AI tools as critical infrastructure components that require the same level of security oversight as any other enterprise software. Our Claude Compliance API guide provides detailed frameworks for implementing proper governance around AI tool usage in regulated environments.
Supply Chain Security Lessons
The incident also demonstrates how quickly threat actors can weaponize leaked information. Although the incident stemmed from a simple packaging mistake, threat actors were quick to capitalize on the resulting attention. Only 24 hours after the leak, they were able to create fake GitHub repositories to distribute credential-stealing malware disguised as "leaked" Claude Code downloads .
Organizations need automated detection systems that can identify when employees access potentially compromised or malicious repositories. This is where tools like our GDPR Compliance Wizard become valuable — helping organizations maintain audit trails and compliance even when individual tools become compromised.
What to Watch: Coming Developments and Ongoing Risks
What's more, attackers are already capitalizing on the leak to typosquat internal npm package names in an attempt to target those who may be trying to compile the leaked Claude Code source code and stage dependency confusion attacks . Security teams should monitor for:
- Dependency Confusion Attacks: Malicious packages designed to exploit organizations trying to rebuild Claude Code from the leaked source
- Sophisticated Prompt Injection: Now that the security prompt architecture is public, expect more targeted attacks against Claude Code's AI safety mechanisms
- Advanced Persistent Threats: Nation-state actors may use the leaked architecture to develop highly targeted attacks against organizations using Claude Code in sensitive environments
Vulnerability Discovery Acceleration
The leak has already led to the discovery of new vulnerabilities. Within days of each other, Anthropic first leaked the source code to Claude Code, and then a critical vulnerability was found by Adversa AI. On March 31, 2026, Anthropic mistakenly included a debugging JavaScript sourcemap for Claude Code v2.1.88 to npm. Within hours, researcher Chaofan Shou discovered the sourcemap and posted a link on X .
Security researchers are now conducting systematic analysis of the leaked code to identify additional vulnerabilities. Organizations should expect a steady stream of new Claude Code CVEs over the coming months.
Our Take: Enterprise AI Security Must Evolve
The Claude Code leak represents more than a simple packaging error — it's a preview of the security challenges that await as AI tools become more deeply integrated into enterprise workflows. The fact that a single misconfigured build file can expose 512,000 lines of critical AI infrastructure code should serve as a wake-up call for any organization deploying AI tools at scale.
For professionals working with Claude in complex workflows, this incident highlights the value of proper AI tool governance. Our guide to Claude Managed Agents provides frameworks for deploying AI tools in ways that minimize exposure to these types of supply chain risks.
Building Resilient AI Workflows
Rather than abandoning AI tools due to security concerns, organizations should focus on building resilient workflows that can withstand individual tool compromises. This includes:
- Multi-layered Security: Never rely solely on an AI tool's built-in security controls
- Continuous Monitoring: Implement real-time monitoring for AI tool behavior and API usage patterns
- Incident Response Planning: Develop specific playbooks for AI tool compromises and source code leaks
- Vendor Risk Management: Regularly assess the security practices of AI tool vendors and have backup plans when primary tools become compromised
Immediate Next Steps for Claude Code Users
Don't wait for your security team to catch up with AI tool risks. Take these actions today:
- Audit Your Environment: Check all systems that have Claude Code installed for the compromised axios versions
- Update Immediately: Move to Claude Code versions newer than 2.1.88 using the official installer, not npm
- Review Repository Access: Audit all repositories that your team has opened with Claude Code in the past 30 days
- Rotate Credentials: Change all API keys, environment variables, and secrets that Claude Code might have accessed
- Implement Monitoring: Set up logging and monitoring for Claude Code API usage patterns
- Train Your Team: Educate developers about the new repository-based attack vectors and the importance of reviewing configuration files before opening projects
The Claude Code leak won't be the last time we see AI tool security incidents of this magnitude. Organizations that adapt their security practices now will be better positioned to handle the next inevitable AI supply chain compromise. For additional guidance on implementing robust AI governance frameworks, explore our AI Search Visibility Accelerator course, which covers enterprise-grade AI tool deployment strategies.
The future of AI in professional environments depends not just on the capabilities of tools like Claude Code, but on our collective ability to deploy them securely. This incident provides valuable lessons for building that secure future.