Open Source

I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead.

After 2 years building agents, a backend lead ditched typed function calls for a single run(command) tool.

Deep Dive

A former backend lead at Manus, now working on the open-source Pinix runtime, has developed a radical approach to AI agent tooling after two years of production experience. Instead of providing LLMs with a catalog of typed function calls (like search_web, read_file, send_email), the system offers just one tool: run(command="..."). This single interface exposes all capabilities through Unix-style CLI commands, treating the LLM as a terminal operator that composes small, focused tools using pipes, exit codes, and standard input/output streams.
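The single-tool interface described above can be sketched in a few lines. This is a minimal illustration, not the Pinix implementation: the tool name `run` and its one-string parameter follow the article, while the subprocess-based executor and the JSON tool schema are assumptions for demonstration.

```python
import subprocess

def run(command: str, timeout: int = 30) -> dict:
    """Execute a Unix-style CLI command; return text streams the LLM can read."""
    proc = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "exit_code": proc.returncode,
    }

# The model is offered exactly one tool schema instead of a catalog:
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run",
        "description": "Execute a Unix CLI command. Compose small tools with pipes.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]
```

Every capability the agent needs then lives behind that one string parameter, so adding a capability means adding a command to the sandbox, not a new schema to the prompt.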

The approach draws a powerful parallel between Unix's 50-year-old design philosophy and modern LLM capabilities. Unix tools communicate exclusively through text streams; LLMs consume and produce only tokens, which are text. Because CLI patterns are abundant in their training data, models handle them natively. Commands like `cat notes.md | grep ERROR` or `clip sandbox bash 'python3 analyze.py'` become the agent's primary interface, eliminating the cognitive overhead of choosing between dozens of specialized function schemas.
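Pipes and exit codes give the agent structured feedback for free. A small sketch under assumptions similar to the article's: a bare-bones `run` helper (names invented here) executes a composed pipeline, and the agent can read `grep`'s standard exit-code convention (0 = match found, 1 = no match) as information rather than as a failure.

```python
import subprocess

def run(command: str) -> tuple[str, int]:
    """Run a shell pipeline; return (stdout, exit_code) for the agent to inspect."""
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return proc.stdout, proc.returncode

# A match: grep exits 0 and the matching line comes back on stdout.
out, code = run("printf 'ok\\nERROR: disk full\\n' | grep ERROR")

# No match: grep exits 1 with empty stdout; the agent treats this as a
# signal ("nothing found"), not an exception to recover from.
out2, code2 = run("printf 'all good\\n' | grep ERROR")
```

This is the composition the article points at: instead of a dedicated `search_logs` function, the model chains two generic tools and interprets the standard streams itself.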

Production experience showed that as tool catalogs grow, LLM accuracy on tool selection decreases significantly. The unified CLI approach reduces this to string composition within a single namespace rather than context-switching between unrelated APIs. The system includes built-in commands for file operations, memory search, sandboxed code execution, and more—all accessible through the familiar Unix paradigm that both humans and LLMs already understand.
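One way such built-in commands could share a namespace with ordinary shell commands is a thin dispatcher in front of the shell. This is a hypothetical sketch, not the described system's design: the built-in name `memory-search` and the registry mechanism are invented for illustration; only the single-namespace idea comes from the article.

```python
import shlex
import subprocess

# Registry of built-in commands that live alongside normal shell commands.
BUILTINS = {}

def builtin(name):
    """Decorator registering a Python function as a CLI-style built-in."""
    def register(fn):
        BUILTINS[name] = fn
        return fn
    return register

@builtin("memory-search")  # invented name; stands in for a memory-search built-in
def memory_search(args):
    # Stub: a real runtime would query the agent's memory store here.
    return f"results for: {' '.join(args)}", 0

def run(command: str) -> tuple[str, int]:
    """Dispatch to a built-in if the first word matches, else to the shell."""
    argv = shlex.split(command)
    if argv and argv[0] in BUILTINS:
        return BUILTINS[argv[0]](argv[1:])
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return proc.stdout, proc.returncode
```

From the model's point of view there is no distinction: `memory-search deadline` and `grep deadline notes.md` are both just strings composed in the same namespace.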

Key Points
  • Replaces traditional function calling with single run(command) tool using Unix CLI commands
  • Draws parallel between Unix's text-stream philosophy (50 years old) and LLMs' token-based understanding
  • Reduces cognitive load by eliminating tool selection overhead—LLMs compose commands instead of choosing APIs

Why It Matters

Simplifies agent architecture while improving reliability, leveraging decades of proven Unix patterns that LLMs already understand.