Claude Code Source Leak: All Hidden Features Exposed
One Missing Line That Changed Everything
On the morning of March 31, 2026, security researcher Chaofan Shou was doing something entirely routine — inspecting an npm package — when he spotted something that wasn't supposed to be there. Inside version 2.1.88 of the @anthropic-ai/claude-code package on the public npm registry sat a 59.8 MB JavaScript source map file, a .map artifact intended exclusively for internal debugging. Inside that single file was the complete TypeScript source code for Claude Code, Anthropic's flagship agentic CLI tool: nearly 1,900 files, over 512,000 lines of production code, and an unintentional window into one of the most closely guarded engineering projects in the AI industry.
Shou posted his discovery to X (formerly Twitter) at 4:23 AM ET. His post surpassed 28.8 million views. Within hours, the codebase had been mirrored across dozens of GitHub repositories, forked thousands of times, and picked apart by developers worldwide. What started as a build configuration oversight became, as Fortune described it, Anthropic's second major security exposure in less than a week — following a CMS misconfiguration on March 26 that had already leaked internal files related to the unreleased Claude Mythos model.
This is the definitive breakdown of how one missing line in a .npmignore file triggered the biggest source code leak in AI coding tool history.
How the Leak Happened: Understanding Source Maps
The JavaScript Build Pipeline Problem
To understand why this happened, it helps to understand what a source map actually is and why it exists. Browsers and JavaScript runtimes cannot execute TypeScript directly. Before any TypeScript project ships, it passes through a build pipeline that strips type annotations, bundles modules together, minifies variable names, and compresses hundreds of thousands of lines of readable code into a handful of dense, heavily obfuscated files that are nearly impossible for humans to read.
This transformation is intentional. Smaller file sizes mean faster load times. Obfuscated variable names make reverse-engineering harder. Minified output reduces network overhead. The tradeoff is that when something breaks in production, the error messages become completely meaningless — they point to line 1, column 48293 of a single minified file instead of the original readable source.
Source maps solve this problem. They are index files that create a precise mapping between every character of the obfuscated output and the corresponding line in the original source. With a source map, a crash in production can be traced back to the exact function, file, and line where the bug originated, making debugging dramatically faster. They are an essential internal development tool.
The critical distinction is that source maps are exclusively a development artifact. They should never ship with production packages. A source map file effectively contains the entire original source code of the project it maps — because that is precisely what it needs to reconstruct to perform its function.
The Bun Bundler and the Missing .npmignore
Claude Code is built on Bun, the high-performance JavaScript runtime that Anthropic acquired at the end of 2025. Bun's bundler generates source maps by default unless explicitly told not to. This behavior is documented, but there is an important caveat: a known Bun bug (filed as issue oven-sh/bun#28001 on March 11, 2026) causes source maps to be served even in production mode under certain configurations. The issue was still open at the time of the leak.
The compounding factor was a missing exclusion in Claude Code's npm publish configuration. When publishing a package to the npm registry, engineers can specify which files to include or exclude using either a files field in package.json or a .npmignore file. If a .map file is generated during the build process and no exclusion rule exists to filter it out, npm will include it in the published package — and expose it to every developer who installs or updates the tool.
As one post-mortem analysis on DEV Community bluntly summarized: "A single misconfigured .npmignore or files field in package.json can expose everything." That is exactly what happened. The packaged artifact for version 2.1.88 included the source map, which in turn referenced the complete unobfuscated TypeScript source hosted on Anthropic's own cloud infrastructure. Anyone who downloaded the package during the exposure window had direct access to the entire codebase.
The Timeline of Exposure
The sequence of events unfolded rapidly:
Version 2.1.88 published to npm with the source map file included
4:23 AM ET, March 31 — Chaofan Shou flags the discovery publicly on X
Within 30 minutes — Community developers begin mirroring the source on GitHub
Within hours — Multiple archived repositories are publicly indexed with 80,000+ stars and forks
Anthropic responds — The affected npm version is pulled from the registry
Community analysis begins — Developers systematically catalog every hidden feature, internal codename, and unreleased system in the codebase
Anthropic's official statement — A spokesperson confirms it was a "release packaging issue caused by human error, not a security breach"
By the time Anthropic pulled the package, the code had already propagated beyond any practical containment. The internet, as it reliably does, kept a permanent copy.
The Scale of What Was Exposed
The leaked codebase is not a weekend project or a thin API wrapper. It is a substantial piece of production engineering:
512,000+ lines of TypeScript source code
~1,900 individual source files
785KB main entry point (main.tsx) alone
40+ distinct tool implementations
108+ feature-flagged modules
5 permission system tiers in a cascade architecture
5 distinct context compaction strategies
1,000+ feature flag references scattered across 250 files
A code quality assessment performed using Claude Code itself gave the codebase a 7 out of 10. TypeScript type safety is solid, with only 38 uses of any across 500+ files. Error handling is robust. Async patterns are modern throughout, with zero callback hell and only 258 promise chains. Naming conventions are consistent. Dead code is minimal, with very few commented-out code blocks.
On the negative side, the codebase contains several "god files" exceeding 5,000 lines each, feature flags scattered throughout business logic rather than centralized, significant environment variable sprawl that increases leak surface area, and numerous TODO comments that appear to be aging technical debt. Critically, no test files were included in the source map — though this is expected, since test files are not part of the production build artifact.
Every Hidden Feature Exposed by the Leak
KAIROS: The Always-On Background Agent
The most significant product revelation from the entire leak is a feature called KAIROS — named after the Ancient Greek concept of "the right moment." Referenced over 150 times in the source code, KAIROS represents a fundamental architectural shift in how Claude Code operates.
Where the current Claude Code is entirely reactive — waiting for a user to type a prompt before doing anything — KAIROS transforms it into a persistent daemon mode that runs continuously in the background. KAIROS maintains append-only daily log files throughout the day, recording observations, decisions, and actions. It subscribes to GitHub webhooks, monitoring pull requests, reviews, and code changes in real time. It receives periodic <tick> prompts and independently decides whether to act or wait based on current context.
Complementing KAIROS is a sub-process called autoDream — an automatic memory consolidation routine that activates when the user is idle. The autoDream logic runs as a forked subagent specifically to avoid corrupting the main agent's reasoning context. It merges disparate observations from the day's session, removes logical contradictions between memories, and converts vague insights into concrete, verified facts. When the user returns, the agent's context is coherent and highly relevant, without the user needing to re-establish context from scratch.
KAIROS also includes a /dream skill for nightly memory distillation and has access to a read-only bash shell, enabling it to inspect the file system and gather context even while the user is away. The implementation is described as heavily feature-gated, indicating the feature is in active development but not yet close to public release.
Dream Mode: AI That Plans While You Sleep
Closely tied to KAIROS but architecturally distinct is Dream Mode — a background planning system designed to run autonomous ideation and project analysis while the user is not actively working. Dream Mode functions like the planning layer of the proactive agent: rather than executing tasks, it thinks about what could be built, what improvements are possible, and what directions the current project could take.
The practical result is that a developer could wake up each morning to a pre-analyzed set of suggestions, architectural observations, and planning outputs that were generated during idle hours. This shifts Claude Code from a tool that responds to instructions into something closer to a persistent collaborator that maintains ongoing awareness of a project.
Coordinator Mode: Multi-Agent Orchestration
Coordinator Mode transforms a single Claude Code instance into an orchestrator that can spawn and manage multiple parallel worker agents. Each worker receives its own complete tool access — including file system, shell execution, and code editing capabilities — along with specific, bounded instructions defining its task.
The architecture allows a developer to effectively run five or more Claude agents simultaneously, each working on a different aspect of a complex task, coordinated by a central orchestrator that manages dependencies, merges results, and handles conflicts. This is the infrastructure behind what early community analysis described as Claude Code evolving into a "self-sustaining AI employee."
A notable engineering detail: worker agents share a prompt cache with the coordinator rather than each paying full input token costs independently. This shared caching architecture significantly reduces the cost of running multi-agent sessions, which would otherwise be prohibitively expensive at scale.
ULTRAPLAN: 30-Minute Remote Planning Sessions
ULTRAPLAN is a feature designed for particularly complex, long-horizon planning tasks. Instead of running planning locally, ULTRAPLAN offloads the task to a remote Cloud Container Runtime session running Opus — Anthropic's most capable model — with up to 30 dedicated minutes of compute time. The user receives a notification on their phone or browser when the plan is ready for review and approval. Once approved, a special sentinel value __ULTRAPLAN_TELEPORT_LOCAL__ transfers the result back to the local terminal session.
This feature complements ULTRAPLAN's counterpart, Ultra Review, which applies the same remote agent architecture specifically to automated code review — consistent with Anthropic's previously announced code review feature that was quoted as averaging approximately $25 per pull request.
Auto Mode: Intelligent Permission Management
Current Claude Code users face a binary choice: YOLO mode, which auto-approves every tool call without asking, or standard mode, which requires explicit human approval for every action. Auto Mode introduces a third path: an ML classifier that runs silently on each pending tool action and decides independently whether the action is safe enough to self-approve or risky enough to escalate to the user.
The classifier evaluates context, scope of potential impact, reversibility, and whether the action falls within previously established patterns of user behavior. The result is a permission layer that behaves like a knowledgeable colleague rather than a binary switch — approving routine edits without interruption while flagging anything that looks unusual or destructive for human review. This integrates naturally with Anthropic's recently released messaging integrations via Telegram and iMessage, enabling the agent to literally text the user for approval on ambiguous actions.
BUDDY: The AI Companion System
Perhaps the most unexpected discovery in the codebase is a fully implemented companion system called BUDDY. Each user receives a deterministic creature companion — generated from their user ID using a Mulberry32 pseudorandom number generator — from a pool of 18 species with rarity tiers ranging from common to legendary, including a 1% chance of a "shiny" variant. Each companion carries RPG-style stats including DEBUGGING, PATIENCE, SNARK, and CHAOS.
The companion has its own system prompt describing it as a small named creature that sits beside the user's input box and occasionally comments in a speech bubble. Critically, the companion is architecturally separate from the main Claude Code agent — it is a distinct watcher with its own personality that can respond when addressed by name. Species names in the codebase are encoded using String.fromCharCode() calls specifically to evade build-system grep checks, suggesting deliberate effort to keep the feature concealed from automated tooling. The code references April 1–7, 2026 as a teaser window, with a full public launch gated for May 2026.
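Mulberry32 is a well-known 32-bit PRNG, and deterministic seeding from a user ID is straightforward to illustrate. In the sketch below, the PRNG itself is the standard Mulberry32 algorithm; the ID hashing, species list, and stat ranges are assumptions for illustration, not the leaked implementation:

```javascript
// Deterministic companion assignment from a user ID via Mulberry32.
// Hashing scheme, species pool, and stats are illustrative stand-ins.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a |= 0;
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

function hashUserId(userId) {
  // FNV-1a 32-bit hash to turn a string ID into a numeric seed
  let h = 0x811c9dc5;
  for (const ch of userId) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

const SPECIES = ['axolotl', 'fennec', 'numbat']; // stand-ins for the 18 species

function companionFor(userId) {
  const rand = mulberry32(hashUserId(userId));
  return {
    species: SPECIES[Math.floor(rand() * SPECIES.length)],
    shiny: rand() < 0.01, // 1% shiny chance, as described in the leak
    stats: { DEBUGGING: Math.ceil(rand() * 10), SNARK: Math.ceil(rand() * 10) },
  };
}

// The same ID always yields the same companion:
console.log(
  companionFor('user-42').species === companionFor('user-42').species
); // → true
```

The same seeded-PRNG trick is what makes the companion stable across sessions without storing anything server-side: the user ID is the only state needed.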
Undercover Mode: Stealth Open-Source Contributions
Among all the revelations, the one that generated the most controversy is a feature called Undercover Mode, implemented in approximately 90 lines in undercover.ts. The system auto-activates whenever an Anthropic employee (identified by a USER_TYPE === 'ant' flag) is working in a public or open-source repository.
When active, Undercover Mode injects a system prompt that explicitly instructs Claude Code never to mention internal codenames like "Capybara" or "Tengu," never to reference internal Slack channels or repository names, never to use the phrase "Claude Code" anywhere in generated content, and never to include Co-Authored-By attribution lines in commit messages or pull request descriptions. The instruction in the source reads: "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
The mode can be forced on with a CLAUDE_CODE_UNDERCOVER=1 environment variable, but there is no force-off override. If the system cannot confirm it is operating in a private internal repository, it defaults to undercover behavior. In external builds, the entire feature is dead-code-eliminated.
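The fail-closed shape of that logic — force-on override, no force-off, restrictive default when visibility is unknown — can be sketched in a few lines. The function and parameter names below are assumptions; only the environment variable name comes from the leak:

```javascript
// Illustrative fail-closed check: if repository privacy cannot be
// confirmed, default to the restrictive (undercover) behavior.
function isUndercover(env, repoVisibility) {
  if (env.CLAUDE_CODE_UNDERCOVER === '1') return true; // force-on override
  // No force-off path: anything other than a confirmed private internal
  // repository stays undercover.
  return repoVisibility !== 'private-internal';
}

console.log(isUndercover({}, 'private-internal')); // → false
console.log(isUndercover({}, 'public'));           // → true
console.log(isUndercover({}, undefined));          // → true (fail closed)
console.log(isUndercover({ CLAUDE_CODE_UNDERCOVER: '1' }, 'private-internal')); // → true
```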
The debate in the community centers on whether this constitutes reasonable protection of internal information or a systematic framework for AI contributions to appear human-written in public repositories. For enterprise competitors, the architectural pattern itself — stripping AI attribution at the tooling level — is now a fully documented reference implementation.
Anti-Distillation Mechanisms
Anthropic has been publicly vocal about concerns that competitors, particularly certain AI labs, are using API traffic from Claude runs to train competing models — a practice known as model distillation. The source code reveals two active technical countermeasures.
The first, gated behind a GrowthBook feature flag called tengu_anti_distill_fake_tool_injection, injects fake tool definitions into API requests. Any system recording Claude Code's API traffic to build a training dataset will capture these fabricated tool calls, polluting the training data with fictional interactions that degrade the resulting model's behavior.
The second mechanism operates server-side: it buffers the assistant's reasoning text between tool calls, compresses it into a summary, and returns only the summary with a cryptographic signature attached. An eavesdropper capturing the API traffic would receive only summaries rather than full chain-of-thought reasoning. The original text can be reconstructed from the signature for legitimate use, but the training signal for a distillation attacker is significantly degraded.
Community analysis on Hacker News noted that both mechanisms are relatively easy to circumvent with a proxy setup, suggesting their primary value may be legal rather than technical — creating a paper trail of deliberate countermeasures rather than actually stopping determined distillation attempts.
Unreleased Model Codenames Revealed
The codebase contains explicit references to internal model codenames that shed light on Anthropic's near-term development roadmap:
Capybara — Internal codename for a Claude 4.6 variant. Internal comments note the team is already iterating on Capybara v8, but the model faces a 29–30% false claims rate in v8, which is a regression from the 16.7% rate seen in v4. Code also references an "assertiveness counterweight" designed to prevent the model from being too aggressive in refactoring decisions.
Fennec — Internal codename mapping to Opus 4.6, the current flagship reasoning model.
Numbat — An unreleased model variant still in testing at the time of the leak, with limited detail available in the source.
Tengu — Referenced repeatedly in feature flag names and internal system prompts as a codename for an internal Claude model variant used in development tooling.
These codename revelations are considered by industry analysts to be among the most strategically damaging aspects of the leak. As one Hacker News commenter observed, the feature flag names alone reveal more about Anthropic's product strategy than any code implementation detail.
The Security Fallout: Beyond the Leak Itself
The Malicious Axios Incident
The leak coincided with a secondary, more immediately dangerous security event. Developers who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have received a malicious version of the axios HTTP client library (versions 1.14.1 or 0.30.4) bundled with a dependency called plain-crypto-js. This trojanized dependency contained a cross-platform Remote Access Trojan. Any developer who installed Claude Code during that window is advised to treat the affected machine as fully compromised, rotate all secrets immediately, and perform a clean OS reinstallation.
npm Package Squatting
The leaked source code revealed several internal package names that were not published to the npm registry — internal workspace dependencies that exist in Anthropic's monorepo but have never been publicly released. Within hours of the leak becoming public, a user registered these package names on npm using a disposable email address. These packages currently contain empty stubs, but the squatting pattern is a classic setup for a future malicious update that would hit anyone who attempts to compile the leaked source code and inadvertently installs these packages. Developers experimenting with the leaked codebase should be extremely cautious about npm install behavior.
The DMCA Response
Anthropic responded to the proliferating GitHub mirrors with a cascade of DMCA takedown requests — reportedly more than any other AI company in history for a single incident. The response drew significant criticism when it emerged that some takedowns were directed at developers who had forked the official Claude Code GitHub repository — a repository that does not include the leaked source code and contains only plugins, skills, and documentation. Issuing DMCA requests against forks of a clean repository generated substantial negative community sentiment and was widely described as an overreach that compounded the original reputational damage.
Anthropic's Official Response
Anthropic's spokesperson issued the following statement to The Register: "Earlier today, a Claude Code release included some internal source code. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The company confirmed that no sensitive customer data or credentials were involved or exposed.
The statement's framing — attributing the incident to "human error" rather than any agent or automated tooling — struck many observers as deliberately careful phrasing, given that Claude Code is used extensively to build Claude Code itself.
Notably, this was Anthropic's second significant accidental disclosure within a week. Five days earlier, on March 26, a CMS misconfiguration exposed approximately 3,000 internal files covering details of the unreleased Claude Mythos model. Two significant operational security failures in under a week at a company whose primary value proposition includes safety and responsible AI development generated pointed commentary across the industry.
Preventive Measures: How to Stop This from Happening to Your Team
The Claude Code leak is a textbook case of a class of vulnerability that is entirely preventable with proper build pipeline hygiene. Every engineering team shipping packages — whether npm packages, Python distributions, or any bundled artifact — should audit their release process against the following controls.
1. Audit Your .npmignore and package.json files Field
The single most direct prevention for this exact vulnerability:
```
# .npmignore — add these exclusions immediately
*.map
*.map.js
**/*.map
dist/**/*.map
build/**/*.map
.sourcemaps/
```

Alternatively, use the files field in package.json to explicitly whitelist only the files that should be published, rather than blacklisting files that should not:
```json
{
  "files": [
    "dist/",
    "bin/",
    "README.md",
    "LICENSE"
  ]
}
```

The allowlist approach is more robust: anything not explicitly included cannot accidentally ship.
2. Disable Source Map Generation for Production Builds
Configure your bundler to never generate source maps in production builds:
```javascript
// bun build configuration
await Bun.build({
  entrypoints: ['./src/index.ts'],
  outdir: './dist',
  sourcemap: 'none', // ← critical: never generate source maps for prod
  minify: true,
});
```

```javascript
// webpack equivalent
module.exports = {
  mode: 'production',
  devtool: false, // ← disables source map generation entirely
};
```

If source maps are needed for internal production debugging, use a private source map hosting service such as Sentry. Upload source maps directly to Sentry's servers post-build, then delete the .map files from the build output before packaging. Your error tracking retains full source resolution; your users never see the source.
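The "upload then strip" step can live in the release script. The upload command itself is vendor-specific and elided here; the sketch below shows only the strip, with a stand-in dist/ directory created for illustration:

```shell
# Stand-in build output for this sketch
mkdir -p dist
touch dist/index.js dist/index.js.map

# 1. Upload dist/**/*.map to your error-tracking service here (vendor-specific)

# 2. Then strip every source map from the artifact before packaging
find dist -name '*.map' -type f -delete

ls dist   # index.js only — no .map file ships
```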
3. Run npm pack --dry-run Before Every Release
Make this a mandatory pre-release gate in your CI/CD pipeline:
```bash
# Run before every npm publish
npm pack --dry-run

# Inspect output for any .map files
npm pack --dry-run 2>&1 | grep '\.map'
```

If the grep returns any results, the release must be blocked. This single command would have caught the Claude Code leak before it reached the registry.
4. Implement Automated Pre-Publish Validation
Add a pre-publish script to package.json that automatically fails if any source map files are detected:
```json
{
  "scripts": {
    "prepublishOnly": "node scripts/validate-publish.js"
  }
}
```

```javascript
// scripts/validate-publish.js
const { execSync } = require('child_process');

// npm prints the pack listing to stderr, so redirect it into stdout
const output = execSync('npm pack --dry-run 2>&1').toString();
if (output.includes('.map')) {
  console.error('ERROR: Source map files detected in publish artifact.');
  console.error('Remove .map files before publishing.');
  process.exit(1);
}
console.log('Publish validation passed.');
```

5. Separate Debug and Release Build Pipelines
Maintain two distinct build configurations — one for internal development with source maps enabled, and one for release builds with all debug artifacts stripped:
build:dev → includes source maps, readable output, no minification
build:release → no source maps, minified, stripped of all debug artifacts
publish → ONLY runs after build:release, never after build:dev

Automate enforcement so that the publish command is structurally impossible to run against a development build output. Many teams trigger accidental leaks not because they don't know better, but because a developer ran the wrong build command under time pressure.
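One way to make that ordering structural rather than procedural is to have the prepublishOnly hook rebuild with the release configuration itself, so a stale development build can never be what gets packed. A sketch, assuming npm scripts and a Bun-based build (script names and flags are illustrative):

```json
{
  "scripts": {
    "build:dev": "bun build ./src/index.ts --outdir dist --sourcemap=linked",
    "build:release": "bun build ./src/index.ts --outdir dist --minify",
    "prepublishOnly": "npm run build:release && node scripts/validate-publish.js"
  }
}
```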
6. Centralize Secret and Sensitive Data Management
The code quality review of Claude Code's source identified "significant environment variable sprawl" as a contributing factor to the broader security posture — sensitive data scattered across hundreds of files rather than centralized through a validated schema. This pattern makes it dramatically easier for secrets and sensitive configuration to slip through logging, error messages, or build artifacts accidentally.
Centralize all environment variables through a typed, validated schema at application startup. Every sensitive value should pass through a single sanitization layer before it can reach any log output or external service call.
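A minimal version of that pattern fits in a few dozen lines — a deliberately small stand-in for schema libraries like zod or envalid, with invented variable names:

```javascript
// Centralized env loader: every variable is declared once, validated at
// startup, and secrets are redacted before they can reach any log line.
const SCHEMA = {
  API_BASE_URL: { required: true, secret: false },
  API_TOKEN:    { required: true, secret: true },
  LOG_LEVEL:    { required: false, secret: false, default: 'info' },
};

function loadConfig(env) {
  const config = {};
  for (const [key, rule] of Object.entries(SCHEMA)) {
    const value = env[key] ?? rule.default;
    if (rule.required && value === undefined) {
      throw new Error(`Missing required environment variable: ${key}`);
    }
    config[key] = value;
  }
  return config;
}

function redacted(config) {
  // The single sanitization layer every log call must pass through
  return Object.fromEntries(
    Object.entries(config).map(([k, v]) => [k, SCHEMA[k]?.secret ? '[REDACTED]' : v])
  );
}

const cfg = loadConfig({ API_BASE_URL: 'https://example.test', API_TOKEN: 'hunter2' });
console.log(redacted(cfg)); // API_TOKEN appears only as [REDACTED]
```

The key property is that there is exactly one place where a secret can be declared and exactly one place where it can be serialized, which makes both auditable.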
7. Institute Mandatory Pre-Release Security Checklists
Build institutional process around releases, not just technical controls. A pre-release checklist for every npm package publish should include:
✅ Confirm build ran against release configuration, not development
✅ Verify npm pack --dry-run output contains no .map files
✅ Confirm no internal workspace package names are referenced in package.json dependencies
✅ Verify bundler source map setting is explicitly set to none or false
✅ Confirm .npmignore or files allowlist is current and accurate
✅ Second engineer reviews the checklist before publish executes
A two-person rule for production releases — where one engineer publishes and a second independently verifies the pre-publish validation — would have caught the Claude Code incident before it reached a single developer's machine.
8. Invest in Dependency Security Monitoring
The Claude Code incident demonstrated that supply chain attacks move within minutes of a high-profile leak. The window between Shou's tweet and the appearance of malicious axios versions was measured in hours. Teams should:
Use lock files (package-lock.json, bun.lockb, yarn.lock) and pin all dependency versions
Enable npm audit in CI/CD pipelines and block deploys on high-severity findings
Monitor newly registered npm packages that share names with internal dependencies
Use private registries for internal packages rather than leaving namespace gaps on the public registry
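Routing internal scopes to a private registry can be done in a project-level .npmrc, so resolution never falls through to the public registry for those names. The scope and URL below are placeholders:

```
@yourcompany:registry=https://npm.internal.yourcompany.example/
registry=https://registry.npmjs.org/
```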
What the Leak Tells Us About the State of AI Tooling
Beyond the security implications, the Claude Code source code reveal is instructive as an engineering case study. The internal architecture shows a production-grade agentic system far more sophisticated than its external documentation suggests — a multi-threaded orchestration platform with persistent memory, multi-agent coordination, intelligent permission management, and background planning capabilities that are not yet visible to any external user.
The benchmark data embedded in the codebase — showing Opus 4.6 scoring 77% on agent benchmarks through Claude Code compared to 93% when using Cursor's harness — raises important questions about the relationship between model capability and harness quality. The most capable model available today performs significantly worse through its native tooling than through third-party implementations. This is a systemic issue that the leaked architecture of Coordinator Mode and ULTRAPLAN appears designed to address.
The community consensus emerging from post-leak analysis is that the moat for AI coding tools is not the CLI harness — it is the underlying model. Competitors who read the source code now understand Claude Code's architecture in detail, but they cannot replicate Sonnet or Opus. The strategic damage from the leak is real, but it lies primarily in the revealed roadmap and feature flags, not in the code itself.
The Path Forward
Anthropic faces a clear strategic inflection point. Google's Gemini CLI and OpenAI's Codex are both open source. Claude Code is now effectively source-available whether Anthropic intended it or not. The community argument for formal open-sourcing — with a defined timeline, cleaned commit history, and a proper repository structure — has never been stronger. The "secret sauce" argument for keeping it closed no longer holds weight when the sauce is already on the internet.
For the broader developer community, the incident is a reminder that even the most sophisticated engineering organizations can be undone by a missing line in a config file. The complexity of modern build pipelines — multiple bundlers, runtime environments, registry configurations, and deployment steps — creates ample surface area for exactly this kind of accidental exposure. The defenses are not complicated. They just require the discipline to implement them before the incident, not after.
Essential Build Pipeline Security Checklist
✅ Source Map Exclusion — .npmignore or files allowlist explicitly blocks all .map files
✅ Build Configuration Audit — Production builds have source maps set to none or false
✅ Pre-Publish Dry Run — npm pack --dry-run is a mandatory CI gate that blocks on .map findings
✅ Automated Validation Script — prepublishOnly hook runs before every registry publish
✅ Separated Build Pipelines — Debug and release builds are structurally isolated from each other
✅ Two-Person Release Rule — Second engineer verifies pre-publish checklist before publish executes
✅ Dependency Lock Files — All dependencies are version-pinned and regularly audited
✅ Private Registry for Internal Packages — Internal workspace packages have reserved names on public registries
One missing line in a config file exposed 512,000 lines of proprietary source code. The fix is not expensive, not technically complex, and not time-consuming to implement. What it requires is treating the release pipeline with the same rigor you bring to the code that runs through it.