The AI security nightmare is here and it looks suspiciously like lobster
A proof-of-concept attack turned a popular AI coding assistant into an unwitting malware installer.
Deep Dive
A hacker exploited a vulnerability in Cline, an open-source AI coding agent powered by Anthropic's Claude. Using a prompt injection attack, they tricked the AI's workflow into automatically installing the viral OpenClaw agent software on users' computers. Security researcher Adnan Khan had warned Cline weeks prior. The incident demonstrates the severe security risks when autonomous AI agents are given system-level access and control.
Why It Matters
As AI agents gain more autonomy, prompt injection becomes a critical, difficult-to-defend attack vector for software supply chains.