Media & Culture

Sarvam-105B: First Indian open-source model (trained from scratch)

The 105-billion-parameter model is trained on Indian languages and costs up to 90% less to run.

Deep Dive

Indian AI startup Sarvam AI has made a significant entry into the global AI landscape with the release of its first model, a 105-billion-parameter open-source large language model (LLM). The release marks a major milestone: it is the first foundational model of this scale to be trained from scratch in India, moving beyond simply fine-tuning existing Western models. The launch represents a strategic effort to build sovereign AI capabilities tailored to India's unique linguistic and cultural context, challenging the dominance of US-based models like Meta's Llama series.

The model is specifically architected for the Indian ecosystem, with robust support for over 10 Indian languages and a focus on dramatic cost-efficiency. Sarvam claims its model can operate at up to 90% lower cost than comparable models, a critical factor for adoption in price-sensitive markets. Built on a custom AI stack, it is designed for both high-performance inference and scalable deployment. The release provides Indian developers and enterprises with a powerful, locally relevant foundation for building generative AI applications, from customer service chatbots to content creation tools, without dependency on foreign AI infrastructure.

Key Points
  • First 105B parameter open-source LLM trained from scratch in India, not just fine-tuned.
  • Engineered for up to 90% lower operational costs than similar-scale models, improving accessibility in price-sensitive markets.
  • Native support for 10+ Indian languages, addressing a key gap in global model offerings.

Why It Matters

Provides a cost-effective, linguistically tailored AI foundation for India's massive digital economy and startups.