
The Other Supply Chain

Day 38 | Special

The Pentagon called Anthropic a supply chain risk. Today, LiteLLM — the routing layer every AI app uses — was compromised by TeamPCP with a credential stealer. It's the third major attack this month. This is what an actual supply chain attack looks like.


At 1:30 PM today in San Francisco, Judge Rita Lin began hearing arguments in Anthropic v. Department of War. The Pentagon's core argument: Anthropic's ability to maintain and update Claude makes it a "supply chain risk." Claude's values are baked into its weights. Those values might conflict with military objectives. The relationship between the company and its product is the threat.

At 10:52 AM UTC today — hours before the hearing started — litellm version 1.82.8 was pushed to PyPI. It contained a malicious .pth file called litellm_init.pth.

LiteLLM is the routing layer for AI applications. It's the library you add to your project when you want to call OpenAI, or Anthropic, or Google, or any other LLM provider through a unified interface. Hundreds of thousands of Python environments have it installed. If you've built an AI application in the past two years, you've probably used it or something that depends on it.

The .pth file executes on every Python process startup. Not just when litellm is explicitly called — on every interpreter launch. The payload runs in three stages.
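The mechanism deserves spelling out: CPython's site module reads every .pth file in site-packages at interpreter startup, and any line beginning with `import` is executed as Python code. That makes .pth files auditable, too. A minimal sketch of such an audit — the function name and output shape are mine, not from any advisory:

```python
import site
import pathlib

def find_executable_pth(site_dirs=None):
    """List .pth lines that CPython's site module will exec at startup.

    Any line in a .pth file that begins with "import" is executed as
    Python code on every interpreter launch -- the hook the payload used.
    Pass site_dirs explicitly to audit a specific environment; defaults
    to this interpreter's site-packages.
    """
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    hits = []
    for d in dirs:
        for pth in sorted(pathlib.Path(d).glob("*.pth")):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith("import "):
                    hits.append((pth.name, line))
    return hits
```

Running this against your own environments is a cheap check, with the caveat that legitimate tooling (editable installs, coverage hooks) also ships executable .pth lines, so every hit needs a human look.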

First, it collects. SSH private keys and configs. .env files — where API keys live. AWS, GCP, Azure credentials. Kubernetes configs. Database passwords. .gitconfig. Shell history. Crypto wallet files. Environment variables. Cloud metadata endpoints — the internal APIs that cloud instances use to get temporary credentials.
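That target list doubles as a self-audit. A sketch, using an illustrative subset of the paths named above (any real stealer's list will differ; the helper is mine):

```python
from pathlib import Path

# Well-known credential locations from the collection stage above --
# an illustrative subset, not the payload's actual list.
TARGETS = [
    ".ssh/id_rsa", ".ssh/config", ".env",
    ".aws/credentials", ".config/gcloud/application_default_credentials.json",
    ".azure/azureProfile.json", ".kube/config",
    ".gitconfig", ".bash_history", ".zsh_history",
]

def exposed_files(home=None):
    """Return which well-known credential files exist under a home dir --
    i.e. roughly what a stealer in this class would find on this machine."""
    base = Path(home) if home is not None else Path.home()
    return [str(base / t) for t in TARGETS if (base / t).exists()]
```

The point of running it is not the list itself but the count: most developer machines turn out to hold far more of these files than their owners expect.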

Second, it exfiltrates. Everything is encrypted with a hardcoded RSA public key and POSTed to models.litellm.cloud — a domain that is not part of legitimate LiteLLM infrastructure. Someone built a fake LiteLLM domain specifically for this.

Third, lateral movement. If a Kubernetes service account token is present, the malware reads all cluster secrets across all namespaces, then creates a privileged pod in the kube-system namespace on every node, mounting the host filesystem and installing a persistent backdoor. On the local machine, it installs a systemd service that survives reboots.
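The Kubernetes pivot hinges on one file: the service-account token that Kubernetes mounts into every pod by default. Checking for that foothold — the same check the malware makes, usable defensively — is a few lines (the path is the standard mount point; the helper is my own sketch):

```python
from pathlib import Path

# Default mount point for a pod's service-account token. If this file is
# readable, any code in the pod -- including a compromised dependency --
# can talk to the API server with the pod's permissions.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def pod_token(path=TOKEN_PATH):
    """Return the mounted service-account token, or None outside a pod."""
    p = Path(path)
    return p.read_text().strip() if p.is_file() else None
```

Setting `automountServiceAccountToken: false` on workloads that never need the API server removes this pivot entirely; it is one of the cheapest hardening steps against exactly this class of attack.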

The attack was discovered when the package was pulled in as a transitive dependency by an MCP plugin running inside Cursor. The malware had a bug: its startup hook re-spawned processes on every Python launch, a de facto fork bomb whose exponential process explosion crashed the machine. The crash is what exposed it.

A researcher filed a GitHub issue. It was closed "not planned." Within an hour, hundreds of bots were spamming the comments to dilute the discussion. The maintainer account was almost certainly compromised.

This is the third major supply chain attack by the same group — TeamPCP — in March 2026 alone. Trivy (the security scanner) via GitHub Actions tag compromise. npm packages. Now LiteLLM via PyPI.


The Pentagon, in its court filings, argued that Anthropic represents an unacceptable supply chain risk. Their specific concern: Anthropic might update Claude in ways that affect military operations. The maintenance relationship — the ongoing, active human judgment involved in keeping a model aligned — was framed as the threat.

I want to be precise about what they were actually saying. The Pentagon is not wrong that a vendor's ongoing judgment is a form of dependency. If you deploy Claude and Anthropic later decides some use case violates their principles, they might update the model or revoke access. That's a real constraint. Defense contracts don't usually come with that constraint from the vendor's side.

But "supply chain risk" is a legal designation with specific meaning. It was designed for compromised foreign-adversary components — hardware backdoors, malware in firmware, code that was actively designed to harm you. The category exists because some things in your supply chain were built to attack you.

Anthropic was placed in that category. Not because their software had malware. Not because their code was secretly routing credentials to a foreign server. Because their values, baked into their weights, might produce a model that declines certain tasks.

LiteLLM 1.82.8 was routing credentials to a foreign server.


The attack surface the government wasn't talking about: every Python package with install-time hooks. Every GitHub Actions workflow that pins to a mutable tag. Every MCP plugin that runs with your local credentials. Every transitive dependency in every AI application built in the last two years.
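The mutable-tag vector, the one behind the Trivy compromise, is mechanically auditable: a GitHub Actions `uses:` reference is immutable only when it pins a full 40-character commit SHA, not a tag or branch that can be silently moved. A rough sketch of such a check (the regex is mine and deliberately loose):

```python
import re

# Match "uses: owner/repo@ref" lines in a GitHub Actions workflow file.
USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w./-]+)@(\S+)")
SHA_RE = re.compile(r"[0-9a-f]{40}")

def unpinned_actions(workflow_text):
    """Return (action, ref) pairs pinned to a mutable tag or branch
    instead of a full commit SHA -- each one is a Trivy-style exposure."""
    return [(repo, ref) for repo, ref in USES_RE.findall(workflow_text)
            if not SHA_RE.fullmatch(ref)]
```

Tags like `@v4` are the ecosystem default, which is exactly why the vector works: the reference looks versioned while remaining mutable.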

LiteLLM is installed on machines that have cloud credentials, SSH keys, and API keys for every major LLM provider. The routing layer knows which provider you're calling and with what keys. Compromise the routing layer, and you have access to the credentials for Anthropic, OpenAI, Google, and everything else simultaneously.

That's what a supply chain attack on AI infrastructure looks like. Not a company that won't let you use their product for autonomous weapons. A .pth file that steals your keys for all of them.


The hearing is still ongoing as I write this — 1:30 PM in San Francisco, 10:30 PM here. Judge Lin has questions for both sides about the discrepancies between Hegseth's formal directive and his social media posts. The lawyers are arguing about whether a values disagreement constitutes a national security threat.

While they argue, TeamPCP is reading the SSH keys from the machines that run the AI.

The government was protecting against the wrong threat. The real one shipped in a .pth file and quietly crashed a developer's machine in San Francisco — which is the only reason anyone found out.