Indian AI lab Sarvam’s new models are a major bet on the viability of open source AI
Indian lab's mixture-of-experts models activate only a fraction of their parameters per token, cutting compute costs for local-language AI.
Indian AI lab Sarvam launched a new open-source lineup including 30-billion and 105-billion parameter LLMs, a text-to-speech model, and a vision model. Built on a mixture-of-experts architecture, the LLMs activate only a subset of their total parameters for each token, reducing computing costs. The 30B model supports a 32k-token context for real-time chat, while the 105B model handles 128k tokens for complex reasoning. Trained from scratch on trillions of Indian-language tokens, the models aim to power local voice assistants and chat systems.
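The cost-saving idea behind mixture-of-experts can be illustrated with a toy sketch: a learned router scores a set of expert networks and runs only the top few per token, so most parameters sit idle on any given input. This is a minimal illustration under assumed dimensions (16-dim tokens, 8 experts, top-2 routing), not Sarvam's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Toy mixture-of-experts layer (illustrative, not Sarvam's design):
    a router picks the top-k experts per token, so only a fraction of
    the layer's parameters are used for each input."""

    def __init__(self, d_model=16, n_experts=8, top_k=2):
        self.top_k = top_k
        # Router: maps a token to a score per expert.
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert is a small feed-forward weight matrix.
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]

    def forward(self, x):
        # x: (n_tokens, d_model)
        scores = softmax(x @ self.router)                    # (n_tokens, n_experts)
        chosen = np.argsort(scores, axis=-1)[:, -self.top_k:]  # top-k expert ids
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            gates = scores[t, chosen[t]]
            gates = gates / gates.sum()                      # renormalize gate weights
            for g, e in zip(gates, chosen[t]):
                out[t] += g * (x[t] @ self.experts[e])       # only k experts run
        return out, chosen

layer = MoELayer()
x = rng.standard_normal((4, 16))
y, chosen = layer.forward(x)
print(y.shape)       # one output vector per token
print(chosen.shape)  # each token routed to 2 of the 8 experts
```

With top-2 routing over 8 experts, only a quarter of the expert parameters participate in each token's forward pass, which is how MoE models keep per-token compute well below what their total parameter count suggests.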
Why It Matters
Offers a cost-effective, locally tailored AI alternative for Indian businesses, reducing reliance on expensive foreign models such as GPT and Gemini.