Why doesn't any OSS tool treat llama.cpp as a first-class citizen?
A viral post demands llama.cpp get equal billing in dev tools, calling Ollama a 'scummy turncoat'.
A developer's frustrated post has gone viral in AI communities, calling out a perceived injustice in the open-source tooling ecosystem. The core complaint is that popular developer tools, such as OpenCode and VS Code Copilot extensions, consistently treat backends like Ollama and LM Studio as first-class citizens while ignoring llama.cpp. The author argues that, from an engineering perspective, adding support should be trivial: often just a matter of letting users specify a local port for an OpenAI-compatible endpoint. Yet this simple integration is frequently missing, forcing extra configuration work on users who prefer the llama.cpp inference engine.
The post escalates by taking direct aim at Ollama, labeling it a 'scummy turncoat' that has gained mindshare despite, in the author's view, not being a good member of the OSS community. This accusation taps into deeper community tensions about attribution, licensing, and ecosystem health. The developer contends that llama.cpp is now highly usable for the average developer and should be recognized as such by toolmakers. The call to action is clear: the post is a plea aimed directly at the developers of these tools to correct the oversight and provide equitable, label-agnostic backend support.
- Popular dev tools like OpenCode and VS Code extensions often lack first-class support for llama.cpp backends.
- The author argues integration is simple: just support OpenAI-compatible endpoints and let users specify a port.
- Ollama is criticized as a 'scummy turncoat' stealing mindshare from llama.cpp in the OSS ecosystem.
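The integration argument above can be made concrete with a minimal sketch. The assumption here is that llama.cpp's `llama-server`, Ollama, and LM Studio all expose the same OpenAI-compatible chat API, so a backend-agnostic tool only needs to vary the base URL. The ports shown are each project's commonly documented defaults (8080, 11434, and 1234 respectively); verify them against your local setup.

```python
# Minimal sketch: label-agnostic backend support reduces to one
# configurable base URL, since all three servers speak the same
# OpenAI-compatible API shape. Ports are common defaults, not guarantees.

def chat_completions_url(host: str = "127.0.0.1", port: int = 8080) -> str:
    """Build the OpenAI-compatible chat endpoint for a local server."""
    return f"http://{host}:{port}/v1/chat/completions"

# llama.cpp's llama-server commonly listens on 8080
LLAMA_CPP = chat_completions_url(port=8080)
# Ollama's default API port is 11434; LM Studio's local server uses 1234
OLLAMA = chat_completions_url(port=11434)
LM_STUDIO = chat_completions_url(port=1234)
```

A tool that accepts an arbitrary host and port here supports every OpenAI-compatible backend at once, which is exactly the "trivial" integration the post is asking for.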
Why It Matters
This debate shapes which open-source inference engines get adopted, impacting developer workflow and ecosystem politics.