DeepSeek AI Releases V4-Pro and V4-Flash Open-Source Models with 1M Context Length
1.6T parameters, MIT license, and prices 85% lower than GPT-5.5.
DeepSeek has released preview versions of its latest open-source AI models: DeepSeek V4-Pro (1.6 trillion parameters) and V4-Flash (284 billion parameters), both supporting a 1 million token context window. The models are available under an MIT license, making them fully open-source and downloadable from Hugging Face. This contrasts sharply with US labs like OpenAI, Anthropic, and Google, which keep their frontier models proprietary. DeepSeek claims major gains in agentic tasks and coding, and highlights integration with tools such as Claude Code, OpenClaw, and OpenCode.
On benchmarks, DeepSeek V4 matches or approaches GPT-5.5 and Claude Opus 4.7 in several categories, though it lags slightly on popular leaderboards like Arena and Artificial Analysis. The most striking advantage is cost: DeepSeek V4 charges $1.74 per million input tokens and $3.48 per million output tokens, roughly one-sixth the combined input-plus-output price of GPT-5.5 ($5 input / $30 output per million tokens) and Claude Opus 4.7 ($5 input / $25 output). This aggressive pricing, combined with open-source accessibility, positions DeepSeek to capture significant adoption in enterprise and developer markets, continuing China's push for AI dominance.
- Two models: V4-Pro (1.6T params) and V4-Flash (284B params), both with 1M token context window.
- MIT license allows free download, modification, and commercial use — unlike US frontier models.
- API pricing is roughly 85% cheaper than GPT-5.5 on combined rates: $1.74 vs $5 per 1M input tokens, and $3.48 vs $30 per 1M output tokens.
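The headline "85% cheaper" figure can be sanity-checked from the per-token rates quoted above. A minimal sketch (prices are the ones stated in this article; the model names and the equal input/output workload are illustrative assumptions, not an official API):

```python
# Per-million-token API prices (USD) as quoted in this article.
PRICES = {
    "deepseek-v4":     {"input": 1.74, "output": 3.48},
    "gpt-5.5":         {"input": 5.00, "output": 30.00},
    "claude-opus-4.7": {"input": 5.00, "output": 25.00},
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total USD cost for a workload, given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative workload: 1M input tokens and 1M output tokens.
ds = workload_cost("deepseek-v4", 1_000_000, 1_000_000)   # 1.74 + 3.48 = 5.22
gpt = workload_cost("gpt-5.5", 1_000_000, 1_000_000)      # 5.00 + 30.00 = 35.00
print(f"DeepSeek V4: ${ds:.2f}, GPT-5.5: ${gpt:.2f}, savings: {1 - ds/gpt:.0%}")
# → DeepSeek V4: $5.22, GPT-5.5: $35.00, savings: 85%
```

Note that the realized discount depends on a workload's input/output mix: output tokens carry the largest gap ($3.48 vs $30, about 88% cheaper), while input tokens alone are about 65% cheaper.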
Why It Matters
Open-source AI with frontier-level performance at 1/6th the cost could disrupt the entire AI market.