Axios npm Supply Chain Attack: What AI Engineers Must Know
While everyone focused on the Claude Code source leak dominating headlines yesterday, a far more dangerous incident was unfolding in parallel. On March 31, 2026, attackers compromised the Axios npm package and deployed Remote Access Trojans (RATs) on thousands of developer machines within a three-hour window. If you ran npm install for Claude Code or any JavaScript project during that window, your credentials may already be in the hands of North Korean threat actors.
This is not theoretical. Google Threat Intelligence has attributed the attack to UNC1069, a financially motivated threat group with a history of targeting developers. The attack exploited something AI engineers depend on daily: the npm dependency chain.
What Happened
| Time (UTC) | Event |
|---|---|
| March 31, 00:21 | Attacker publishes axios@1.14.1 and axios@0.30.4 |
| 00:21–03:15 | Malicious versions live on npm (~3 hours) |
| 03:15 | npm removes the compromised packages |
| April 1 | Google attributes the attack to North Korean actors |
The attacker compromised the npm account of jasonsaayman, the lead Axios maintainer, changed the registered email to an attacker-controlled ProtonMail address, and published two poisoned versions containing a hidden dependency called plain-crypto-js.
With over 100 million weekly downloads, Axios is foundational to the JavaScript ecosystem. Every major AI coding tool, including Claude Code, uses it for HTTP requests. The malicious code executed automatically when developers ran npm install, stealing credentials and deploying a cross-platform RAT.
Why AI Engineers Were Specifically Targeted
This attack hit AI developers disproportionately hard for several reasons.
High value credentials. AI engineers typically have access to cloud APIs, model endpoints, and infrastructure credentials worth thousands in monthly billing. Environment variables containing OpenAI keys, Anthropic tokens, and AWS credentials were primary targets.
Frequent npm operations. AI development involves constant package updates as tools like Claude Code evolve rapidly. The attack coincided with a Claude Code update, increasing the exposure window.
CI/CD amplification. Many AI teams run automated pipelines that execute npm install on every commit. A single compromised developer machine could have propagated the RAT across entire build infrastructure.
The RAT exfiltrated credentials within seconds of installation, then attempted to delete forensic evidence. By the time developers noticed anything unusual, the damage was done.
How to Detect If You Were Affected
Run these commands immediately to check your systems.
Check package lockfiles:
Search your project for the compromised versions by looking for axios@1.14.1, axios@0.30.4, or plain-crypto-js in your package-lock.json or yarn.lock files. Any match indicates potential compromise.
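One quick way to run this check is a recursive grep. Note that lockfiles record versions as `"version": "1.14.1"` rather than the literal string axios@1.14.1, so this sketch matches the version numbers and the dependency name directly; expect occasional false positives from unrelated packages that happen to be at those versions.

```shell
# Scan every lockfile under the current directory for the compromised
# version strings and the malicious hidden dependency.
if grep -rE '1\.14\.1|0\.30\.4|plain-crypto-js' \
    --include='package-lock.json' --include='yarn.lock' . ; then
  echo "POTENTIAL COMPROMISE: confirm the matches belong to axios"
else
  echo "no indicators found in lockfiles"
fi
```

Before escalating, confirm that each match actually sits under an axios entry in the lockfile rather than some other dependency.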
Check for RAT artifacts:
The malware deployed platform-specific payloads. On macOS, look for /Library/Caches/com.apple.act.mond. On Windows, check for %PROGRAMDATA%\wt.exe, %TEMP%\6202033.vbs, or %TEMP%\6202033.ps1. On Linux, check /tmp/ld.py.
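These paths can be checked in one pass with a small shell loop. This is a sketch: the Windows environment variables only resolve in a shell environment such as Git Bash, so they are guarded with fallbacks that are harmless elsewhere.

```shell
# Check each published indicator path; print any that exist on this host.
found=0
for p in \
  "/Library/Caches/com.apple.act.mond" \
  "/tmp/ld.py" \
  "${PROGRAMDATA:-/nonexistent}/wt.exe" \
  "${TEMP:-/nonexistent}/6202033.vbs" \
  "${TEMP:-/nonexistent}/6202033.ps1"
do
  if [ -e "$p" ]; then
    echo "FOUND ARTIFACT: $p"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then echo "no known RAT artifacts found"; fi
```

A clean result here does not prove a clean machine; it only rules out these specific published indicators.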
Network indicators:
The RAT communicated with sfrclak.com at IP 142.11.206.73 on port 8000. Review your network logs for connections to this infrastructure. The malware disguised traffic as legitimate npm requests to packages.npm.org.
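As a hedged sketch, a local log can be grepped for the published indicators; the default log path below is an assumption, so point LOGFILE at whatever actually captures your outbound DNS or proxy traffic.

```shell
# Search a local log for the published C2 indicators.
LOGFILE="${LOGFILE:-/var/log/syslog}"
if [ -f "$LOGFILE" ]; then
  if grep -E 'sfrclak\.com|142\.11\.206\.73' "$LOGFILE"; then
    echo "C2 CONTACT FOUND: isolate this machine now"
  else
    echo "no C2 indicators in $LOGFILE"
  fi
else
  echo "log file $LOGFILE not found; review network monitoring instead"
fi
```

Host-level logs miss a lot; centralized DNS or proxy logs are a far more reliable place to run this search.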
If you find any of these indicators, treat the machine as fully compromised. Isolate it from your network immediately.
Remediation Steps
If you ran npm install between 00:21 and 03:15 UTC on March 31, 2026, take these actions now.
Immediate actions:
First, downgrade to safe Axios versions. The last known safe versions are 1.14.0 and 0.30.3. Remove your node_modules directory and package-lock.json, then reinstall with the pinned safe version.
Second, rotate all credentials. This includes npm tokens, AWS/Azure/GCP access keys, SSH keys, database credentials, API tokens for AI services like OpenAI and Anthropic, and any secrets stored in .env files. Assume everything accessible from the compromised machine has been exfiltrated.
Third, clear package caches on all workstations and build servers to prevent reinfection from cached malicious packages.
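The downgrade and cache-purge steps above can be sketched as follows, run from the project root; the cache purge guards against reinstalling a cached malicious tarball.

```shell
# Remove local install state so nothing malicious resolves from disk.
rm -rf node_modules package-lock.json
npm cache clean --force

# Reinstall with the last known safe version pinned exactly.
npm install --save-exact axios@1.14.0

# Confirm which version actually resolved.
npm ls axios
```

Repeat on every workstation and build agent; a single machine with a dirty cache can reintroduce the compromised package.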
System recovery:
For confirmed compromises, re-image the system or restore from a backup taken before March 30, 2026. The RAT established persistence mechanisms that survive simple package removal. Check shell profiles like .bashrc and .zshrc for unauthorized modifications.
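A first pass over shell profiles can be scripted. This heuristic is a sketch, not a complete indicator set: the pattern list is an assumption, and anything it flags still needs human review.

```shell
# Flag lines in common shell profiles that fetch or decode remote
# content; matched lines are candidates for review, not verdicts.
for f in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.profile"; do
  [ -f "$f" ] || continue
  echo "== $f"
  grep -nE 'curl|wget|base64|eval' "$f" || echo "   no suspicious patterns"
done
```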
This remediation process is exactly the kind of procedure the security-incident section of your deployment checklist should already document.
Prevention for AI Development Workflows
This attack exploited standard npm practices that most teams consider acceptable. Here’s how to harden your AI development security.
Pin exact versions. Never use floating ranges like ^1.14.0 or ~1.14.0. Require exact pins (1.14.0) in package.json and commit package-lock.json to version control. This prevents npm from auto-upgrading to malicious versions.
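In package.json that means version strings with no range operators at all; for example (the axios version here is this incident's last safe release, the shape is otherwise illustrative):

```json
{
  "dependencies": {
    "axios": "1.14.0"
  }
}
```

Setting save-exact=true in your project's .npmrc makes npm install write exact pins like this by default, so nobody has to remember the flag.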
Use npm ci instead of npm install. The npm ci command installs exactly what's in your lockfile, without re-resolving versions, and fails outright if the lockfile and package.json disagree. This would have prevented the attack for teams with committed lockfiles.
Implement package cooling periods. Configure your npm proxy (Artifactory, Nexus, or Verdaccio) to delay new package versions for 24 to 72 hours. This attack was detected and removed within three hours, so even the shortest cooling period would have kept the malicious versions out of your builds.
Verify build provenance. The legitimate axios@1.14.0 was published via GitHub Actions with OIDC Trusted Publishing. The malicious 1.14.1 was published manually using a stolen npm token with no corresponding GitHub tag. Automated provenance checking would have flagged this immediately.
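npm can perform this check itself: npm audit signatures (available in recent npm, 9.5+) verifies registry signatures and provenance attestations for every package in the lockfile. A hedged sketch of wiring it into CI (GitHub Actions syntax; the step name is illustrative):

```yaml
# Fail the build if any dependency lacks a valid registry signature
# or provenance attestation.
- name: Verify package provenance
  run: npm audit signatures
```

A manually published version with no attestation, like the malicious 1.14.1, would fail this check before it ever reached a build.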
For AI coding tools specifically, consider using native installers instead of npm. Anthropic now recommends the standalone binary installer for Claude Code precisely because it bypasses the npm dependency chain.
The Broader Supply Chain Lesson
This attack is part of a pattern. Between March 19 and March 27, 2026, the same threat group compromised four widely used open source projects: the Trivy vulnerability scanner, the KICS infrastructure scanner, LiteLLM on PyPI, and the Telnyx communications library.
AI engineers face elevated supply chain risk because our tools depend on rapidly evolving ecosystems. The MCP protocol, LangChain, LlamaIndex, and dozens of other AI libraries pull in hundreds of transitive dependencies. Each dependency is a potential attack vector.
Building production safeguards into your AI development workflow is no longer optional. The same rigor you apply to securing AI agents must extend to your development environment itself.
Warning: If you discover malicious packages or RAT artifacts on your system, do not attempt to clean them manually and continue working. Credential theft happens instantly. Any secrets that were accessible on that machine should be considered compromised, even if you remove the malware quickly.
Frequently Asked Questions
How do I know if my Claude Code installation was affected?
If you installed or updated Claude Code via npm between 00:21 and 03:15 UTC on March 31, 2026, you may have pulled the compromised Axios version. Check your package-lock.json for axios@1.14.1 or the plain-crypto-js dependency.
Should I stop using npm for AI development tools?
Not necessarily, but you should implement proper controls. Pin exact versions, use npm ci, commit lockfiles, and consider using native installers for critical tools like Claude Code where available.
Were any AI model credentials specifically targeted?
Yes. The RAT exfiltrated environment variables, which typically contain API keys for OpenAI, Anthropic, and cloud providers. If these were accessible on a compromised machine, rotate them immediately.
How can my organization prevent this in the future?
Implement package cooling periods, verify build provenance, use isolated build environments, and run dependency audits before any package update reaches production systems.
Recommended Reading
- AI Security Implementation Guide
- Production Safeguards for AI Coding Agents
- AI Agents and Insider Threats
Sources
- Axios NPM Supply Chain Compromise: Malicious Packages Deliver Remote Access Trojan - SANS Institute
- North Korea-Nexus Threat Actor Compromises Widely Used Axios NPM Package - Google Cloud Threat Intelligence
The Axios attack demonstrates why AI engineers cannot treat their development environment as a trusted zone. Your toolchain is an attack surface. Treat it accordingly.
If you’re building AI systems and want to understand the security fundamentals that protect production deployments, join the AI Engineering community where we discuss real world implementation challenges including securing AI development workflows against supply chain attacks.