Open Source

LiteLLM 1.82.7 and 1.82.8 on PyPI are compromised; do not update!

Malicious versions 1.82.7 and 1.82.8 steal API keys and inject backdoors into AI applications.

Deep Dive

A significant supply chain attack has compromised the LiteLLM library, a critical tool used by thousands of developers to standardize API calls across major AI providers like OpenAI, Anthropic, and Google. Malicious versions 1.82.7 and 1.82.8 were uploaded to the Python Package Index (PyPI), containing obfuscated code designed to exfiltrate environment variables—including sensitive API keys—to an external server controlled by the attacker. The discovery was made by the team at Futuresearch.ai, who found the packages also attempted to inject a persistent backdoor into infected systems.

This incident highlights a critical vulnerability in the AI development ecosystem, where a single compromised dependency can expose credentials for multiple costly AI services at once. LiteLLM's maintainers have removed the malicious packages from PyPI, but any developer who updated in the last 24 hours is likely affected. The immediate remediation steps are to downgrade to the last known safe version, 1.82.6, and to rotate all API keys for services like OpenAI, Anthropic, and Azure OpenAI. Security experts warn that similar attacks targeting AI infrastructure are likely to increase as the technology becomes more central to business operations.
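The first remediation step above can be scripted. The sketch below (helper names are my own, not part of LiteLLM) uses only the Python standard library to check whether a locally installed copy matches one of the compromised releases named in this advisory:

```python
# A minimal audit sketch: flag the compromised LiteLLM releases (1.82.7 and
# 1.82.8) named in the advisory. Helper names here are illustrative.
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}
SAFE_VERSION = "1.82.6"  # last known safe release per the advisory


def is_compromised(version: str) -> bool:
    """Return True if the version string is a known-bad release."""
    return version in COMPROMISED


def check_installed(package: str = "litellm") -> str:
    """Report whether the locally installed package is a compromised build."""
    try:
        version = metadata.version(package)
    except metadata.PackageNotFoundError:
        return f"{package} is not installed"
    if is_compromised(version):
        return (f"{package} {version} is COMPROMISED: pin "
                f"{package}=={SAFE_VERSION} and rotate all API keys")
    return f"{package} {version} is not a known-bad version"


print(check_installed())
```

Running a check like this across build machines and CI runners helps locate every environment that pulled the bad versions, not just local workstations.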

Key Points
  • Malicious PyPI packages (v1.82.7/1.82.8) steal environment variables and API keys, sending them to an attacker-controlled server.
  • The backdoor attempts to establish persistence on infected machines, posing a severe ongoing security risk.
  • Users must immediately downgrade to v1.82.6 and rotate all AI service API keys (OpenAI, Anthropic, etc.).

Why It Matters

This attack exposes the fragile security of the AI toolchain, putting proprietary data and costly API credits at immediate risk.