Jan-Code-4B: a small code-tuned variant of Jan-v3
A 4-billion-parameter model fine-tuned for coding tasks, designed as a drop-in replacement for the Haiku model in Claude Code.
The Jan team, led by Bach, has launched Jan-Code-4B, a specialized 4-billion-parameter language model fine-tuned for programming tasks. The release is a focused experiment in supporting daily developer workflows with a model small enough to run efficiently on local hardware. Built on the Jan-v3-4B-base-instruct base model, it is explicitly positioned as a potential drop-in replacement for the Haiku model within the Claude Code ecosystem, offering a more accessible, locally runnable option for coding assistance.
The model is optimized for core programming functions: code generation, editing, refactoring, basic debugging, and test writing. Early evaluations show a slight improvement over the base model on standard coding benchmarks, and the model is reported to feel more reliable on code-specific prompts. It is distributed via Jan's own platform, Jan Desktop, and on Hugging Face in both standard and GGUF formats (compatible with tools like llama.cpp), with recommended inference parameters provided. The release underscores a growing trend toward smaller, domain-specific models that trade general capability for efficiency and cost-effectiveness in targeted use cases.
- A 4-billion parameter model fine-tuned from Jan-v3-4B-base-instruct for coding tasks.
- Positioned as a drop-in replacement for Claude Code's Haiku model, optimized for local execution.
- Available on Hugging Face and via Jan Desktop, with recommended inference parameters (temperature: 0.7, top_p: 0.8).
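For readers who want to apply the recommended settings locally, here is a minimal sketch of building a chat-completion request for an OpenAI-compatible endpoint, such as the one exposed by llama.cpp's `llama-server` or Jan Desktop's local API. The URL and model identifier below are assumptions for illustration, not documented values; only the sampling parameters come from the release notes.

```python
import json

# Assumed local endpoint; llama.cpp's `llama-server` defaults to port 8080.
BASE_URL = "http://localhost:8080/v1/chat/completions"


def build_request(prompt: str) -> dict:
    """Build a chat-completion payload using the sampling parameters
    recommended by the Jan team (temperature 0.7, top_p 0.8)."""
    return {
        "model": "jan-code-4b",  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # recommended value
        "top_p": 0.8,        # recommended value
    }


payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
# A real call would POST this payload to BASE_URL,
# e.g. requests.post(BASE_URL, json=payload).
```

Because the request shape follows the OpenAI chat-completions convention, the same payload should work unchanged whether the model is served by llama.cpp, Jan Desktop, or another compatible runtime.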
Why It Matters
It gives developers a performant, locally runnable coding assistant, reducing reliance on cloud APIs and lowering costs.