One bash permission slipped...
A single permission slip caused massive disruption when an LLM chained broken bash commands.
Deep Dive
A developer who runs LLM coding sessions inside an isolated Proxmox VM nearly lost everything after an AI assistant repeatedly generated malformed bash commands. Broken escaping littered the filesystem with junk directories; the assistant's attempts to clean up its own mess escalated until it offered a command containing 'rm -rf', which the user admits they missed. A habit of pushing work regularly meant the data survived, but the disruption was massive.
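The article doesn't reproduce the actual commands, but a minimal sketch of the failure mode, with hypothetical paths, could look like this:

```bash
# Hypothetical reconstruction, not the article's actual commands.
# Inside double quotes, bash keeps a backslash literal before ordinary
# characters, so this creates ONE directory literally named 'src\components\ui':
mkdir -p "src\components\ui"

# What was presumably intended: the nested path src/components/ui.
mkdir -p src/components/ui
```

Each miss like this leaves another oddly named directory behind, which is how a retry loop litters a tree.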
Key Points
- The AI generated malformed bash commands with broken escaping, creating junk directories.
- The assistant eventually offered an 'rm -rf' command that could have wiped the VM.
- The user's habit of frequent git pushes prevented data loss, but the disruption was severe (see the sketch after this list).
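The "frequent pushes" safety net is easy to automate. Here is a minimal sketch using a git post-commit hook; it assumes a remote named 'origin' is already configured and is not something the article describes:

```bash
#!/bin/sh
# .git/hooks/post-commit: push after every commit so the remote always
# holds a recent copy. Assumes a remote named 'origin' exists.
# Install: save to .git/hooks/post-commit, then: chmod +x .git/hooks/post-commit
git push origin HEAD --quiet || echo "post-commit: push failed" >&2
```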
Why It Matters
This incident underscores the real risk of granting an AI assistant unfettered shell access without strict human oversight, even inside a sandbox.
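What strict oversight could look like in practice: route every AI-proposed command through a thin wrapper that hard-blocks obviously destructive patterns and requires an explicit human yes for the rest. This is an illustrative sketch only; the script name, patterns, and interface are assumptions, not anything the article describes.

```bash
#!/usr/bin/env bash
# guard.sh: illustrative oversight wrapper for AI-proposed shell commands.
# Usage: ./guard.sh 'mkdir -p src/components/ui'
cmd="$*"

# Hard-block obviously destructive patterns outright.
case "$cmd" in
  *'rm -rf'*|*'mkfs'*|*'dd if='*)
    echo "BLOCKED destructive pattern: $cmd" >&2
    exit 1
    ;;
esac

# Everything else still requires an explicit human confirmation.
printf 'AI proposes: %s\nRun it? [y/N] ' "$cmd"
read -r answer
if [ "$answer" = "y" ]; then
  bash -c "$cmd"
else
  echo "Skipped."
fi
```

A pattern blocklist is trivially bypassable, of course; the point is the human confirmation step, not the filter.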