DeepSeek unveils next-gen AI model as Huawei vows ‘full support’ with new chips
DeepSeek's V4 models match US rivals at 1/10 the cost with Huawei chip support.
DeepSeek has finally released its next-generation foundational AI model, V4, as an open-source offering that the company claims is competitive with top US closed-source models from OpenAI and Google DeepMind. The Hangzhou-based startup released two versions: V4-pro with 1.6 trillion parameters (its largest ever) and V4-flash with 284 billion parameters. Both models feature a 1 million-token context window, a massive upgrade from the previous 128,000-token limit, which DeepSeek says it achieved with "world-leading" cost efficiency. V4-flash is priced identically to the V2 model from June 2024, making it one of the cheapest cutting-edge models on the market.
Huawei immediately announced "full support" for V4 models using its Ascend chips and supernode systems for inference, with more details promised in a livestream. AI chipmaker Cambricon also quickly declared compatibility. Analysts from Huatai Securities noted that V4's explicit mention of domestic chip compatibility signals a significant improvement in domestic GPU capabilities and points to wider adoption of those chips this year. While V4-pro is too large for consumer hardware, its extended technical report on architecture and training techniques will benefit global AI developers, and V4-flash offers a practical, cost-efficient alternative.
- V4-pro has 1.6 trillion parameters and a 1M-token context window, up from 128K in the previous model.
- Huawei pledged full support with Ascend chips for inference, and Cambricon also announced compatibility.
- V4-flash pricing matches DeepSeek's V2, making it one of the cheapest cutting-edge models available.
Why It Matters
Open-source AI now rivals US giants, boosted by domestic Chinese chips, reshaping global AI competition.