Show HN: GoModel – an open-source AI gateway in Go; 44x lighter than LiteLLM
Open-source gateway unifies 11+ LLM APIs with a 44x smaller footprint than popular alternatives.
Enterpilot has released GoModel, a high-performance, open-source AI gateway written in Go that consolidates access to 11+ major large language model providers under a single, OpenAI-compatible API. The gateway supports OpenAI, Anthropic's Claude models, Google's Gemini, xAI's Grok, Groq, OpenRouter, Z.ai, Azure OpenAI, Oracle, and Ollama, automatically detecting which providers are available based on supplied credentials. Its most striking technical claim is a 44x reduction in resource footprint compared to the popular Python-based LiteLLM gateway, which would make it significantly more efficient for production deployments.
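Because the gateway is OpenAI-compatible, clients can talk to it with a standard chat completions payload. The sketch below builds such a request in Go; the base URL, port, and model name are illustrative assumptions, not values from the GoModel docs.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildChatRequest prepares an OpenAI-compatible chat completion request
// aimed at a GoModel gateway. The payload shape follows the standard
// OpenAI chat completions schema; baseURL is whatever address the
// gateway is deployed at (assumed here, not taken from the project docs).
func buildChatRequest(baseURL, model, prompt string) (*http.Request, error) {
	payload := map[string]any{
		"model": model, // the gateway routes this to the matching provider
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
		"stream": false, // set true for streaming responses
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// "http://localhost:8080" and "gpt-4o-mini" are placeholder assumptions.
	req, err := buildChatRequest("http://localhost:8080", "gpt-4o-mini", "Hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
	// To actually send it: resp, err := http.DefaultClient.Do(req)
}
```

Since the request format is the standard OpenAI one, existing OpenAI SDKs should also work by pointing their base URL at the gateway.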
Deployment is streamlined through Docker, requiring only environment variables for API keys to get started. GoModel supports core LLM operations including chat completions with streaming, the newer OpenAI /responses endpoint, text embeddings, file uploads and management, and batch processing where supported by the underlying provider. The project also includes optional infrastructure components like Redis, PostgreSQL, and Prometheus for monitoring, and can be run from source with Go 1.26.2+.
The gateway implements a provider passthrough feature, allowing direct access to provider-specific endpoints via /p/{provider}/... paths, while maintaining a unified front-end for common operations. This architecture lets development teams standardize on one API interface while easily switching or load-balancing between different LLM backends. The open-source release includes comprehensive documentation, Docker Compose profiles for different deployment scenarios, and clear warnings about securing API keys in production environments.
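The passthrough route shape described above can be sketched as a small helper that joins a provider name and a provider-native endpoint under the /p/ prefix. The gateway address and the Anthropic endpoint used in main are illustrative assumptions.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// passthroughURL builds a provider passthrough URL of the form
// /p/{provider}/{endpoint}, matching the route scheme the article
// describes. The base address is whatever host the gateway runs on.
func passthroughURL(base, provider, endpoint string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	// path.Join normalizes slashes so callers can pass endpoints
	// with or without a leading "/".
	u.Path = path.Join("/p", provider, endpoint)
	return u.String(), nil
}

func main() {
	// e.g. route a request straight to a provider's native endpoint,
	// bypassing the unified API (host and endpoint are assumptions).
	u, err := passthroughURL("http://localhost:8080", "anthropic", "v1/messages")
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```

This split lets common traffic use the unified OpenAI-compatible routes while provider-specific features remain reachable without leaving the gateway.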
- Unified API for 11+ LLM providers including OpenAI, Anthropic, Gemini, and Groq with automatic provider detection
- 44x lighter resource footprint than LiteLLM; written in Go for high-performance deployments
- Supports chat completions, embeddings, files, batches, and provider passthrough with Docker one-command deployment
Why It Matters
Drastically simplifies multi-LLM application development while reducing infrastructure costs and operational complexity.