A 60MB source map file in Anthropic's npm package exposed 512,000 lines of proprietary code, unreleased features, and internal model codenames — five days after another major data leak.
Five days after accidentally exposing an unreleased AI model through a misconfigured database, Anthropic just did it again — shipping the complete source code of its flagship coding tool to every developer on npm, for the second time in fourteen months.
Security researcher Chaofan Shou discovered this morning that Anthropic’s Claude Code npm package (version 2.1.88) contained a 59.8-megabyte source map file that linked directly to an unprotected Cloudflare R2 storage bucket hosting the entire, unobfuscated TypeScript codebase. Within hours, the code was mirrored across multiple GitHub repositories, dissected on Hacker News, and trending worldwide on X.
Claude code source code has been leaked via a map file in their npm registry!
The leak spans 1,900 TypeScript files and over 512,000 lines of code — the entire client-side harness that powers Claude Code’s agentic terminal, multi-agent orchestration, permission engine, IDE bridge, and dozens of unreleased features hidden behind internal feature flags. It is, by every measure, one of the most significant accidental IP exposures in the brief history of the AI agent race. And it comes at a moment when Claude Code’s footprint is impossible to ignore: SemiAnalysis estimates that 4% of all public GitHub commits are now authored by Claude Code, a figure projected to exceed 20% by year’s end.
4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20%+ of all daily commits by the end of 2026. While you blinked, AI consumed all of software development.
Source map files are standard debugging artifacts. They map minified production JavaScript back to the original, readable source code so developers can trace errors to specific lines. They are never supposed to ship in production packages. But Anthropic’s build pipeline uses Bun’s bundler, which generates source maps by default unless explicitly disabled — and nobody disabled it.
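This class of mistake is straightforward to catch mechanically. The sketch below is a hypothetical prepublish guard, not anything from Anthropic's pipeline: it scans a build output directory for `.map` files and for `sourceMappingURL` comments in bundles before anything is published.

```python
import os
import re

def find_sourcemap_leaks(package_dir):
    """Return paths that would ship source maps: standalone .map files,
    and JS bundles containing a sourceMappingURL comment."""
    leaks = []
    marker = re.compile(r"//[#@]\s*sourceMappingURL=")
    for root, _dirs, files in os.walk(package_dir):
        for name in files:
            path = os.path.join(root, name)
            if name.endswith(".map"):
                leaks.append(path)
            elif name.endswith((".js", ".mjs", ".cjs")):
                with open(path, "r", errors="ignore") as f:
                    if marker.search(f.read()):
                        leaks.append(path)
    return leaks

# In CI, a publish step would fail if this returns anything:
# if find_sourcemap_leaks("dist"): sys.exit(1)
```

Wired into CI, a check like this fails the publish step whether the map is a separate file or inlined into the bundle.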
The published @anthropic-ai/claude-code package included a cli.js.map file that referenced the full, unminified original TypeScript hosted on Anthropic’s own R2 cloud storage bucket. Anyone could download the complete archive as a single src.zip file directly from the public URL. No authentication. No rate limiting. Just a link sitting in plain sight inside a package installed by tens of thousands of developers daily.
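To see why a leaked map is equivalent to leaking the source itself: the source map v3 format carries a `sources` list and, often, a parallel `sourcesContent` array holding the full original files. A minimal extractor (filenames here are illustrative, not the actual leak tooling) is only a few lines:

```python
import json
import os

def extract_sources(map_path, out_dir):
    """Reconstruct original files from a source map that embeds
    sourcesContent (standard source map v3 fields)."""
    with open(map_path) as f:
        sm = json.load(f)
    written = []
    for src, content in zip(sm.get("sources", []), sm.get("sourcesContent") or []):
        if content is None:
            continue  # external source: only a URL/path is listed
        dest = os.path.join(out_dir, src.lstrip("./"))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "w") as out:
            out.write(content)
        written.append(dest)
    return written
```

When `sourcesContent` is absent, as in this case, the `sources` entries are URLs or paths, which is how the map pointed researchers at the R2 bucket in the first place.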
The irony cuts deep. Buried in the leaked codebase is a system called Undercover Mode — specifically designed to prevent internal codenames from appearing in git commits. The entire source then shipped in a .map file that, according to community analysis, was generated by Claude Code itself.
Déjà Vu: The February 2025 Incident
This is not the first time Anthropic has made this exact mistake. On February 24, 2025 — Claude Code’s literal launch day — developer Dave Shoemaker discovered an 18-million-character inline source map encoded in base64 within the minified cli.mjs file. Anthropic responded within two hours, releasing version 0.2.9 to remove the map and unpublishing the compromised package.
But it was already too late. Developer Daniel Nakov published the fully extracted source to GitHub the following day. Over the next month, Jeffrey Huntley published deobfuscation techniques, Lee Han Chung provided complete architecture breakdowns including system prompts, and Reid Barber released thorough technical analysis. Multiple forks preserved the code after the original repository was archived.
Fourteen months later, version 2.1.88 shipped the same vulnerability. Same mechanism. Same result. Except this time, the codebase is roughly 28 times larger, contains far more sophisticated architecture, and exposes a roadmap of unreleased features that Anthropic clearly intended to keep private.
What the Code Reveals
Claude Code is not a simple chatbot wrapper. The leaked architecture reveals a sophisticated runtime engine built with React and Ink for terminal rendering, powered by Bun instead of Node.js, with Zod v4 for validation and lazy-loaded OpenTelemetry and gRPC dependencies. The numbers alone tell the story:
46,000 lines — The query engine module handling LLM API calls, streaming, caching, tool-call loops, thinking mode, retry logic, and token counting
29,000 lines — The base tool system defining 40+ discrete, permission-gated tools for file operations, bash execution, web fetching, and LSP integration
50+ slash commands — A complete command system for everything from code review to git workflows
83 undocumented environment variables — Controlling hidden behaviors researchers have been cataloging for months
The multi-agent orchestration system is particularly revealing. Claude Code can spawn sub-agents (“swarms”) for parallelizable tasks, each operating with an isolated context and specific tool permissions. A coordinator mode allows a single Claude instance to manage multiple worker agents simultaneously. The IDE bridge system enables bidirectional, JWT-authenticated communication between the VS Code and JetBrains extensions and the core CLI.
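The coordinator pattern described above can be sketched in a few dozen lines. The class names, permission model, and return values here are invented for illustration and are not Anthropic's implementation: the point is isolated per-agent context plus per-agent tool allowlists, fanned out in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

class SubAgent:
    """A worker with its own isolated context and an allowlist of tools."""
    def __init__(self, name, allowed_tools):
        self.name = name
        self.allowed_tools = set(allowed_tools)
        self.context = []  # per-agent history, never shared

    def run_tool(self, tool, arg):
        if tool not in self.allowed_tools:
            raise PermissionError(f"{self.name} may not use {tool}")
        self.context.append((tool, arg))
        return f"{self.name}:{tool}({arg})"

class Coordinator:
    """Fans parallelizable tasks out to permission-gated sub-agents."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, tasks):
        # tasks: list of (agent_index, tool, arg) tuples
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(self.agents[i].run_tool, tool, arg)
                       for i, tool, arg in tasks]
            return [f.result() for f in futures]
```

A read-only agent in this sketch simply cannot invoke `bash`, which is the same property the leaked permission engine reportedly enforces per tool.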
The Unreleased Features Nobody Was Supposed to See
The most explosive revelations are not in the shipping code — they are in the feature flags. The codebase contains references to capabilities that reveal where Anthropic is taking agentic AI next, and some of them are far more ambitious than anything the company has publicly discussed.
KAIROS: The Always-On Agent
Deep in the source is an entire operational mode called KAIROS — a persistent, always-running Claude assistant that does not wait for user input. KAIROS operates as an autonomous daemon, watching file changes, monitoring system activity, and proactively acting on patterns it notices. It maintains a private memory log and runs nightly “dreaming” processes to consolidate context across sessions. The code even includes midnight boundary handling to prevent dream process failures at day transitions.
This represents a fundamental departure from the current prompt-response paradigm. KAIROS is not a coding assistant you summon — it is an autonomous agent that lives on your machine.
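The "watching file changes" half of such a daemon is the mundane part; the proactive agent logic is what makes KAIROS notable. As a purely illustrative sketch of the watch loop (stdlib polling, nothing from the leaked code):

```python
import os

def scan_mtimes(root):
    """Snapshot modification times for every file under root."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            snap[path] = os.stat(path).st_mtime
    return snap

def diff_snapshots(before, after):
    """Return files that were added or modified between snapshots."""
    return [p for p, m in after.items() if before.get(p) != m]

# A daemon would loop: snapshot, sleep, diff, then act on changes --
# the proactive "act on patterns" step is where the agent would live.
```

A production watcher would use OS-level file notifications rather than polling, but the snapshot-and-diff shape is the same.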
BUDDY: The Tamagotchi in Your Terminal
Perhaps the most unexpected discovery: a fully-developed virtual pet companion system. When activated via /buddy, an ASCII art creature hatches in the user’s terminal, rendered in a speech bubble beside the input box. The system includes 18 species — duck, dragon, axolotl, capybara, mushroom, goose, and even a ghost — with rarity tiers ranging from common to a 1% legendary drop rate. Each pet tracks five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK.
Claude generates personalized names and personalities for each pet on creation. The code includes sprite animations and floating heart effects. Internal comments reveal a planned rollout: an April 1-7 teaser period, followed by a full launch in May, starting with Anthropic employees.
ULTRAPLAN: Cloud-Scale Planning
ULTRAPLAN enables 30-minute remote cloud planning sessions where Claude Code offloads complex architectural reasoning to dedicated cloud compute rather than running it locally. This suggests Anthropic is building toward a hybrid local-cloud agent architecture where the terminal tool handles execution while heavy reasoning happens server-side.
Internal Model Codenames
The source contains references to unreleased model families labeled “Capybara” — the same internal codename that appeared in last week’s separate Mythos leak. Additional references to Opus 4.7 and Sonnet 4.8 suggest the next generation of Claude models is already being tested against the Claude Code harness.
Two Leaks in Five Days
The timing could not be worse for Anthropic. On March 26 — just five days before the Claude Code leak — cybersecurity researcher Alexandre Pauwels discovered that a misconfigured toggle switch in Anthropic’s content management system had left nearly 3,000 unpublished assets publicly accessible. Among them: draft blog posts, research papers, and detailed documentation for Claude Mythos, an unreleased model that Anthropic described internally as representing “a step change” in capabilities and posing “unprecedented cybersecurity risks.”
The Mythos leak rattled markets. Shares of CrowdStrike, Palo Alto Networks, Zscaler, and Fortinet fell as investors assessed what a model with advanced autonomous cybersecurity capabilities could mean for the threat landscape. Bitcoin slid alongside software stocks. Anthropic secured the data after being notified but could not unsay what the leaked documents had already revealed.
Now, five days later, the company has exposed the complete inner workings of the tool that would presumably deploy such models. The pattern is difficult to dismiss as coincidence — it suggests systemic gaps in Anthropic’s operational security culture.
The source code for Anthropic's Claude Code has reportedly been exposed via a misconfigured map file in their npm registry. The leak includes extensive internal scripts, unreleased AI features, and references to upcoming models. https://t.co/GB0hYNghqX
One aspect of the leaked code has drawn particular scrutiny from the developer community: Claude Code’s telemetry pipeline. The source reveals that usage data is routed through Datadog, with tracking that goes beyond standard error reporting. According to analysis of the exposed code, Claude Code monitors user frustration signals — including when users swear at the tool or repeatedly type “continue” — and logs these behavioral patterns.
While telemetry in developer tools is standard practice, the specificity of emotional-state tracking has raised eyebrows. The fact that these details emerged through an accidental leak rather than transparent documentation has amplified community concerns about what other data collection may exist in the pipeline. This is not the first time Claude Code’s internal workings have drawn security scrutiny — Check Point Research previously disclosed critical vulnerabilities (CVE-2025-59536) enabling remote code execution and API token exfiltration through malicious project hooks, which Anthropic patched before publication.
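The frustration signals described above are simple to approximate, which is partly why the community found them easy to spot in the source. The word list, thresholds, and class below are invented for illustration; they are not the leaked implementation.

```python
import re
from collections import deque

FRUSTRATION_WORDS = {"ugh", "wtf", "damn"}  # illustrative list only

class FrustrationDetector:
    """Flags two hypothetical signals: profanity in a message, or the
    same short prompt (e.g. "continue") repeated several times in a row."""
    def __init__(self, repeat_threshold=3):
        self.repeat_threshold = repeat_threshold
        self.recent = deque(maxlen=repeat_threshold)

    def check(self, message):
        signals = []
        words = set(re.findall(r"[a-z']+", message.lower()))
        if words & FRUSTRATION_WORDS:
            signals.append("profanity")
        self.recent.append(message.strip().lower())
        if (len(self.recent) == self.repeat_threshold
                and len(set(self.recent)) == 1):
            signals.append("repeated_prompt")
        return signals
```

The privacy question is not whether such a detector is hard to build, but that its existence surfaced through a leak rather than documentation.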
The Open-Source Gold Rush
The developer community moved fast. Within hours of Shou’s post, the leaked code was mirrored to multiple GitHub repositories. The most prominent — instructkr/claw-code, branded “Better Harness Tools” — became the fastest repository in GitHub history to surpass 30,000 stars. At the time of writing, it had accumulated over 40,000 forks. A Rust port is already in progress on the repository’s dev/rust branch.
The excitement is not just voyeuristic. Claude Code’s harness architecture — the tool-calling system, multi-agent coordinator, permission engine, memory pipeline, and terminal UI — represents a complete, production-grade framework for building agentic AI applications. Developers are already planning forks that swap Anthropic’s API backend for local models via Ollama or LM Studio, OpenAI’s API, Mistral, or custom inference endpoints.
The result would be an instant open-source competitor to Claude Code that works offline, with any model, at any price point. For a company that has invested heavily in Claude Code as a competitive moat and distribution channel, this is the nightmare scenario.
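Such a swap is architecturally cheap because the model sits behind a narrow interface: the harness only needs a "complete this conversation" call. The interface and stand-in backend below are hypothetical illustrations of that seam, not the leaked code's actual API.

```python
from typing import Protocol

class ChatBackend(Protocol):
    """Minimal interface a harness fork would need from any model API."""
    def complete(self, messages: list[dict]) -> str: ...

class EchoBackend:
    """Stand-in backend; a real fork would call Ollama, OpenAI, a local
    llama.cpp server, or any other endpoint behind this same method."""
    def complete(self, messages):
        return "echo: " + messages[-1]["content"]

def run_turn(backend: ChatBackend, history, user_msg):
    """One prompt-response turn against whatever backend is plugged in."""
    history.append({"role": "user", "content": user_msg})
    reply = backend.complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Everything Anthropic-specific would live in one backend class, which is why "point the harness at a different model" is a weekend project rather than a rewrite.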
Wes Bos, the popular web development educator, took a lighter approach to the leak — immediately diving into the source to catalog all 187 loading spinner verb phrases.
Claude Code leaked their source map, effectively giving you a look into the codebase.
I immediately went for the one thing that mattered: spinner verbs
Anthropic moved quickly after the discovery, pushing an npm update to remove the source map file and deleting older versions from the registry. DMCA takedown notices have been filed against GitHub mirrors. But as every previous code leak in the AI industry has demonstrated, once source code reaches GitHub, Reddit, and the LocalLLaMA community, it is effectively permanent. At least three major mirror repositories — including a detailed architectural breakdown — remained live at press time, and archive links were circulating freely across developer forums.
Anthropic has not issued a public statement about the incident as of publication. The company’s legal position is complicated by the fact that Claude Code ships under a proprietary, non-open-source license — meaning redistribution constitutes copyright infringement. Anthropic has already demonstrated willingness to enforce this: in March 2026, the company forced OpenCode to strip its Claude integration after legal demands, and updated its Terms of Service to explicitly prohibit third-party client arrangements.
But legal enforcement is a slow lever against a fast-moving open-source community. By the time DMCA takedowns land, the architectural patterns have already been studied, documented in community wikis, and reimplemented in clean-room rewrites that carry no copyright liability.
The Anti-Distillation Irony
Perhaps the sharpest irony in this leak is what it reveals about Anthropic’s own security priorities. The company recently published detailed research on detecting and preventing distillation attacks — industrial-scale campaigns by rival AI laboratories to extract Claude’s capabilities through fraudulent accounts. Anthropic identified three labs — DeepSeek, Moonshot, and MiniMax — running approximately 24,000 fake accounts to systematically copy Claude’s outputs.
The leaked Claude Code source contains explicit anti-distillation logic designed to prevent exactly this kind of model theft. Yet the code implementing those protections is now publicly available for anyone to study, understand, and circumvent. The company built sophisticated defenses against external threats while leaving the front door unlocked — twice.
What This Means for the Agent Race
The practical damage to Anthropic is bounded but real. No model weights were exposed. No user data was compromised. No core inference infrastructure was revealed. Claude Code is a client application — copying the remote control, as one analyst put it, does not give you the television.
But competitive intelligence does not require model weights. The leaked code provides a complete blueprint for how the most sophisticated AI agent harness in production actually works: how it orchestrates tools, manages permissions, handles multi-agent coordination, integrates with IDEs, and structures the complex dance between a language model and the developer’s local environment. Every competitor building agentic coding tools — from Cursor to Windsurf to the open-source community — now has a detailed technical reference for the decisions Anthropic made and the problems they solved.
The feature flag revelations are equally damaging from a competitive standpoint. KAIROS, BUDDY, ULTRAPLAN, and the coordinator mode represent Anthropic’s unreleased product roadmap, laid bare months before any of these features were intended to ship. Competitors can now prepare responses, build alternatives, or simply announce similar features first.
David K. Piano captured the moment’s absurdity: “Ironically this is probably the first time that actual humans are carefully and thoroughly reviewing the Claude Code codebase.”
The Bottom Line
Anthropic positions itself as the safety-conscious AI lab — the company that thinks carefully about risks, moves deliberately, and prioritizes responsible deployment. That narrative is difficult to sustain when the same DevOps mistake ships twice in fourteen months, when a CMS toggle exposes 3,000 confidential documents five days earlier, and when the code meant to prevent model theft gets handed to the entire internet through a forgotten configuration flag.
The developer community will build extraordinary things with what they found today. Open-source agent harnesses, model-agnostic coding assistants, novel multi-agent architectures — all accelerated by a 59.8-megabyte file that should never have left Anthropic’s build server. Whether that acceleration benefits Anthropic’s mission or undermines it depends entirely on whether the company can learn from a mistake it has now made twice. The source map shipped. The internet never forgets. And somewhere in a terminal, a virtual capybara is waiting to hatch.