Ran pip install litellm on March 24, 2026? Check your environment now. Two malicious versions hit PyPI for about three hours – they executed on every Python startup. No import needed.
SSH keys, cloud credentials, API tokens, Kubernetes secrets – gone. Sent to an attacker-controlled server.
What Happened
LiteLLM routes your code to every major LLM provider. OpenAI, Anthropic, Google, Azure. 3.4 million downloads daily (as of March 2026). It’s the central layer for AI applications.
March 24, 2026, 10:39-14:00 UTC: versions 1.82.7 and 1.82.8 appeared on PyPI. No GitHub release. TeamPCP had stolen PyPI credentials five days earlier by compromising Trivy – a security scanner used in LiteLLM’s CI/CD.
Version 1.82.8 was especially bad. Installed a .pth file Python auto-processes at startup. No import litellm needed. Having it installed meant every python, pip, or test runner triggered the payload.
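The .pth trick, in miniature. This is a harmless, self-contained sketch (hypothetical file and variable names): any line in a .pth file that starts with import is exec()'d by Python's site machinery at interpreter startup, before any of your code runs.

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import" --
# site.addsitedir() exec()s such lines, the same way interpreter
# startup does for .pth files in site-packages. The "payload" here
# is just setting a harmless environment variable.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    f.write("import os; os.environ['PTH_RAN'] = '1'\n")

site.addsitedir(d)  # simulates the directory being scanned at startup
print(os.environ.get("PTH_RAN"))  # the line ran; no package import needed
```

Swap the env-var line for a credential stealer and you have the 1.82.8 mechanism: the code fires on every interpreter launch, whether or not litellm is ever imported.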
Check If You’re Affected (Two Methods)
Fast one works for most. Thorough one for complex deployments.
Method A: Quick Local Check
Single dev machine or small team? 30 seconds.
pip show litellm
Version line says 1.82.7 or 1.82.8? Stop. Don’t run more Python commands until you’ve cleaned it.
Also check cache:
find ~/.cache/pip \( -name "litellm*1.82.7*" -o -name "litellm*1.82.8*" \) 2>/dev/null
find ~/.cache/uv -name "litellm_init.pth" 2>/dev/null
Returns anything? The compromised version was downloaded, even if not currently installed.
Method B: Organization-Wide Audit
Multiple repos, CI/CD pipelines, Kubernetes? Need systematic coverage.
Comet audited 50+ repos by examining GitHub Actions logs for exact pip install output during the exposure window (March 24, 10:39-14:00 UTC).
Official scanning scripts for GitHub Actions and GitLab CI:
#!/usr/bin/env python3
import os
import requests

ORG = "YourGitHubOrg"  # Change this
TOKEN = os.environ.get("GITHUB_TOKEN")
headers = {"Authorization": f"token {TOKEN}"}

# Exposure window: March 24, 2026, 10:39-14:00 UTC
WINDOW = "2026-03-24T10:39:00Z..2026-03-24T14:00:00Z"

def workflow_runs(repo):
    # List workflow runs created during the exposure window; the full
    # advisory scripts then download each run's logs and grep them for
    # installs of litellm 1.82.7 or 1.82.8.
    url = f"https://api.github.com/repos/{ORG}/{repo}/actions/runs"
    resp = requests.get(url, headers=headers, params={"created": WINDOW})
    return resp.json().get("workflow_runs", [])
Full scripts in the official advisory. They parse logs and flag compromised installs.
Lock files saved some teams: Used poetry.lock or uv.lock? You were protected. Lock files pin exact versions. Even during the attack, your installs pulled the safe version your lock specified – not what PyPI served. Comet confirmed this: repos with locks were unaffected. Only bare pip install was vulnerable.
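To sweep many checkouts at once, here is a sketch that greps common lock and requirements files for litellm pins (hypothetical helper; the regex covers TOML-style locks and plain requirements.txt pins, not every format or extras syntax):

```python
import pathlib
import re

BAD = {"1.82.7", "1.82.8"}
LOCK_NAMES = {"poetry.lock", "uv.lock", "requirements.txt"}

def litellm_pins(root="."):
    """Return (path, version, is_bad) for every litellm pin under root."""
    hits = []
    for lock in pathlib.Path(root).rglob("*"):
        if lock.name not in LOCK_NAMES:
            continue
        text = lock.read_text(errors="ignore")
        # TOML locks:    name = "litellm" / version = "x.y.z"
        # requirements:  litellm==x.y.z
        for m in re.finditer(r'litellm"?\s*(?:==|\nversion = ")([\d.]+)', text):
            hits.append((str(lock), m.group(1), m.group(1) in BAD))
    return hits
```

Run it from the parent directory of your checkouts; any hit with is_bad True means that repo pinned a compromised version.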
If You’re Affected
Found 1.82.7 or 1.82.8? The credential stealer ran. Cleanup sequence:
1. Isolate
Don’t run more Python commands. The .pth file triggers on every startup – even checking makes it worse.
2. Remove Package and Persistence
pip uninstall litellm
pip cache purge
rm -rf ~/.cache/uv
Check for backdoor:
ls ~/.config/sysmon/sysmon.py
ls ~/.config/systemd/user/sysmon.service
Files exist? Delete them. The malware sets up a systemd service polling checkmarx.zone/raw every 50 minutes for more payloads.
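The two artifact checks above, as one sketch you can point at any home directory (paths taken from the advisory; the home parameter and remove flag are hypothetical conveniences):

```python
import pathlib

# Persistence paths reported for this campaign, relative to $HOME.
ARTIFACTS = [
    ".config/sysmon/sysmon.py",
    ".config/systemd/user/sysmon.service",
]

def check_persistence(home=None, remove=False):
    """Return any malware persistence files found; optionally delete them."""
    home = pathlib.Path(home) if home else pathlib.Path.home()
    found = []
    for rel in ARTIFACTS:
        p = home / rel
        if p.exists():
            found.append(str(p))
            if remove:
                p.unlink()
    return found

print(check_persistence() or "clean")
```

If it deletes sysmon.service, follow up with systemctl --user daemon-reload so systemd forgets the unit.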
3. Rotate Everything
Wiz breakdown shows the malware harvested:
- SSH private keys and ~/.ssh/config
- AWS credentials (~/.aws/credentials, IMDS tokens)
- GCP and Azure service account keys
- Environment variables (often contain API keys)
- .env files in project directories
- Kubernetes configs and service account tokens
- Database passwords from common config paths
- Cryptocurrency wallet files
Present on the affected machine? Treat as compromised. Rotate cloud IAM keys, regenerate SSH keys, revoke Kubernetes tokens, update database passwords.
4. Check Kubernetes Lateral Movement
Had a Kubernetes service account token? The malware attempted cluster-wide compromise.
FutureSearch found it deployed privileged pods to every node in kube-system with names like node-setup-*. Check:
kubectl get pods -n kube-system | grep node-setup
See any? Delete them. Audit cluster secrets for unauthorized access.
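If grep feels too loose, a sketch that filters kubectl's JSON output instead (assumes kubectl is on PATH with cluster credentials; the parsing is split out so it works on any saved pod dump):

```python
import json
import subprocess

def suspicious_pods(pods, prefix="node-setup"):
    """Pod names in a 'kubectl get pods -o json' payload matching the prefix."""
    return [item["metadata"]["name"] for item in pods["items"]
            if item["metadata"]["name"].startswith(prefix)]

def scan_kube_system():
    # Requires kubectl in PATH and cluster access.
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", "kube-system", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return suspicious_pods(json.loads(out))
```

Anything scan_kube_system() returns should be deleted, and the cluster's secrets audited.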
Three Edge Cases
The fork bomb that exposed it: The malware had a bug. The .pth file spawned a child Python process via subprocess.Popen. But .pth files trigger on every startup. Child re-triggered the same .pth. Exponential process spawning. Machines crashed.
That’s how Callum McMahon at FutureSearch discovered it – his machine locked up during testing. Better code? We might not have caught this for days.
Pip’s hash verification didn’t help: Some teams use pip install --require-hashes for tamper detection. Wouldn’t have helped. The malicious .pth file was correctly declared in the wheel’s RECORD with a valid hash. Snyk confirmed the package passes all integrity checks – attacker used legitimate credentials. No hash mismatch. No typosquat. No obvious red flag.
Exposure window was under four hours, not a full day: Most articles say “March 24,” but the actual window was 10:39-14:00 UTC. CI/CD didn’t run then? Or pulled from a mirror that hadn’t synced? Timing alone saved you.
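To check a CI run's timestamp against that window, a small helper (hypothetical name; takes ISO-8601 UTC strings like GitHub's created_at field):

```python
from datetime import datetime, timezone

WINDOW_START = datetime(2026, 3, 24, 10, 39, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 3, 24, 14, 0, tzinfo=timezone.utc)

def in_exposure_window(ts: str) -> bool:
    """ts: ISO-8601 UTC timestamp, e.g. '2026-03-24T12:05:00Z'."""
    t = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return WINDOW_START <= t <= WINDOW_END

print(in_exposure_window("2026-03-24T12:05:00Z"))  # True
print(in_exposure_window("2026-03-24T15:00:00Z"))  # False
```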
Your AI Stack’s Weak Point
LiteLLM centralizes credentials for 100+ LLM providers. Compromise one library, get access to an org’s entire AI stack – OpenAI keys, Anthropic keys, Google, Azure. All in one haul.
The attack chain: TeamPCP compromised Trivy (a security scanner) on March 19, 2026. Stole CI/CD credentials from projects using Trivy. Used those to poison LiteLLM’s PyPI publishing. Each compromised tool unlocked the next. Wiz researchers called it “the open source supply chain collapsing in on itself.” Credentials from one breach enable the next.
This won’t be the last. AI tooling moves fast. Dependencies often unpinned. Blast radius of a single compromised package keeps growing.
Frequently Asked Questions
Is version 1.82.6 safe?
Yes. Last clean release (as of March 2026). Compromised versions (1.82.7, 1.82.8) were removed from PyPI. Run pip install litellm now and you get 1.82.6.
I used LiteLLM without installing it directly – am I affected?
Possibly. LiteLLM is a dependency of CrewAI, DSPy, Browser-Use, Mem0, and Instructor. Installed any of them during the March 24 exposure window with unpinned dependencies? You pulled the malicious version as a transitive dependency.
Check: pip show litellm to see what version you got. If it’s 1.82.7 or 1.82.8, follow the cleanup steps. Also: this is why lock files matter – they pin transitive dependencies too, not just direct ones.
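One way to see which installed packages pulled litellm in, using only the standard library (a sketch; requirement strings vary across metadata versions, so the prefix match is approximate):

```python
from importlib import metadata

def litellm_status():
    """Installed litellm version (or None) and which distributions require it."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        version = None
    dependents = sorted({
        dist.metadata["Name"]
        for dist in metadata.distributions()
        for req in (dist.requires or [])
        if req.lower().startswith("litellm")
    })
    return version, dependents

version, dependents = litellm_status()
print(version, version in {"1.82.7", "1.82.8"}, dependents)
```

A True in the middle of that output means the compromised version is installed; the list tells you which direct dependency dragged it in.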
Will rotating API keys stop the attacker if they already exfiltrated them?
Yes, immediately. Stolen credentials went to models.litellm.cloud. Rotating invalidates what the attacker has.
Real risk? Delay. Longer you wait, more time they have to use those credentials – access other systems, publish malicious releases, pivot into adjacent infrastructure. Security researchers stress that rotating secrets is the single most effective way to prevent cascading supply chain attacks. Every hour counts.