The leak of Claude Code's source code from Anthropic has sent shockwaves through the AI community, raising concerns about security, strategy, and intellectual property. What makes it particularly notable is that Anthropic, an American artificial intelligence company, has built its reputation around strong security practices and strict controls, yet the leak stemmed from a basic packaging oversight. The incident occurred on March 31, when Anthropic inadvertently published the complete source code for its flagship coding assistant, Claude Code, via a misconfigured source map file included in the company's package on the npm registry.
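Source maps exist for debugging: when a bundler generates one with the optional `sourcesContent` field populated, the full original, unminified source is embedded inside the `.map` file itself. If that file ships in a published package, anyone who installs the package can read the source straight back out. A minimal sketch of the idea (the file names and contents here are hypothetical, not from the actual leak):

```python
import json

# A JavaScript source map is plain JSON. The optional "sourcesContent"
# array embeds the complete original source for each entry in "sources".
# Hypothetical example of what a bundler might emit:
source_map_json = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/cli.ts", "src/agent.ts"],
    "sourcesContent": [
        "export function main() { /* original TypeScript */ }",
        "export class Agent { /* original TypeScript */ }",
    ],
    "mappings": "AAAA",
})

def recover_sources(map_text: str) -> dict[str, str]:
    """Pair each original source path with its embedded file content."""
    smap = json.loads(map_text)
    return dict(zip(smap.get("sources", []), smap.get("sourcesContent") or []))

# Reading the embedded originals back out requires no reverse engineering:
for path, content in recover_sources(source_map_json).items():
    print(f"{path}: {len(content)} chars of original source")
```

This is why the standard mitigation is simply to exclude `.map` files from published packages (for example via npm's `files` allowlist in `package.json`), or to emit source maps without `sourcesContent`.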
Cybersecurity professionals criticised the lapse, arguing that even leading AI firms may be lagging in operational security, and raising concerns about future risks as AI systems become more autonomous. The leak is also seen as a blow to Anthropic's operational reputation, especially as the company reportedly prepares for a $380 billion IPO.
Online, the leak has triggered intense reactions, with many users criticising and mocking Anthropic's operational security practices and pointing out the obvious irony. Shakthi Vadakkepat, an enterprise AI architect, called the lapse "the mothership of all code leaks," noting that it stemmed from something as basic as shipping a map file within an npm package.
"The big deal is that Anthropic is a company that prides itself on the level of security and controls they have in place, and then they ship a map file in their npm. The other thing is that they'll have a tough time suing the guy who created the repo on GitHub because he has essentially ported the code to Python, hence making the DMCA inapplicable here. And the logical argument would be that nothing was 'hacked' per se; Anthropic essentially shipped the map file themselves," he wrote on X.

Almost immediately after the Claude Code source code leaked online, a new GitHub repo appeared: instructkr/claw-code.