“Is Sarvam A DeepSeek Clone? Yes And No!”, Swarajya, March 07, 2026
“When Sarvam AI unveiled its new language model at India’s AI Impact Summit in New Delhi in February 2026, the Bengaluru-based startup had reason to feel confident. The model, called Sarvam-105B, had been trained from scratch on Indian soil using thousands of Nvidia graphics processors. It could reason in Sanskrit, parse legal documents in Tamil, and switch between Hindi and English mid-sentence — the way hundreds of millions of Indians actually speak.
On certain mathematical reasoning benchmarks, it matched or outperformed DeepSeek-R1, a Chinese model more than six times its size. Prime Minister Narendra Modi was photographed wearing the company’s prototype AI-powered smart glasses at the event.
The backlash arrived within hours. On X, formerly Twitter, critics dissected the model’s configuration file and declared Sarvam-105B a “scaled-down DeepSeek architecture clone.” One widely shared post ran the file through ChatGPT, which described it as a “Mini DeepSeek-V2 style model…”
Read full article at swarajyamag.com

Sarvam.ai is a welcome and much-needed development. Hats off to Dr. Vivek Raghavan, Dr. Pratyush Kumar, and their team. I hope the Sarvam text tokenizer is also extended to include Brahui, Burushaski, and Tibetan and their related variants, as well as the various historical layers of Tamil, Samskrit, Vedic, and other Prakrit and Apabhramsha languages (as much as possible). Following and analyzing ancient Bhāratīya literature, inscriptions, and their developmental timeline layers is a complex and daunting undertaking; our indigenous AI models can now greatly help advance our cultural and academic understanding.
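The tokenizer wish above has a concrete technical basis: a byte-level tokenizer with no script-specific merges is much more expensive for Indic scripts, because Devanagari and Tamil characters each occupy three UTF-8 bytes versus one for ASCII, so untrained-script text inflates token counts (and therefore cost and context usage) roughly threefold. A minimal sketch of this worst-case baseline, in plain Python with no real tokenizer library (the fertility metric here is simply raw UTF-8 bytes per character, an assumed proxy, not Sarvam's actual tokenizer behavior):

```python
# Worst-case token counts for a byte-level tokenizer with no learned merges:
# every UTF-8 byte becomes one token, so scripts outside ASCII pay a
# per-character multiplier equal to their UTF-8 byte width.

def byte_fertility(text: str) -> float:
    """Tokens (bytes) per character under a merge-free byte-level tokenizer."""
    return len(text.encode("utf-8")) / len(text)

samples = {
    "latin":      "namaste world",   # ASCII: 1 byte per character
    "devanagari": "नमस्ते",           # Devanagari block: 3 bytes per character
    "tamil":      "வணக்கம்",          # Tamil block: 3 bytes per character
}

for label, s in samples.items():
    print(f"{label:12s} chars={len(s):2d} "
          f"bytes={len(s.encode('utf-8')):2d} "
          f"fertility={byte_fertility(s):.1f}")
```

This is why training tokenizer merges on Brahui, Burushaski, Tibetan, and the older Prakrit/Apabhramsha layers matters: well-chosen merges collapse those multi-byte sequences into single tokens and bring fertility back toward one.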