DeepSeek V4 is here: How it compares to ChatGPT, Claude, Gemini | Mashable
1.6 trillion parameters, MIT license, and 85% cheaper than US rivals.
DeepSeek V4 Preview, released by Chinese AI company DeepSeek, is the latest salvo in the East vs. West AI race. Unlike US rivals OpenAI, Anthropic, and Google, DeepSeek offers its model under an MIT open-source license, allowing anyone to download and modify it. Two versions are available: DeepSeek-V4-Pro with 1.6 trillion parameters and DeepSeek-V4-Flash with 284 billion parameters. The model shows strong performance in agentic tasks and coding, and benchmark results place it on par with frontier models like GPT-5.5 and Claude Opus 4.7, though it currently lags on leaderboards like Arena and Artificial Analysis.
DeepSeek V4's most disruptive edge is pricing: it costs $1.74 per million input tokens and $3.48 per million output tokens—roughly one-sixth the cost of GPT-5.5 ($5/$30) and Claude Opus 4.7 ($5/$25). Even the cheaper Gemini 3.1 Pro ($2/$12) is significantly more expensive. A task costing $35 on GPT-5.5 would be just $5.22 on DeepSeek V4, an 85% reduction. The model also integrates with leading AI agents like Claude Code, OpenClaw, and OpenCode. DeepSeek cements China's lead in open-source AI, following recent releases like Moonshot AI's Kimi K2.6, which V4 reportedly outperforms.
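The per-token math behind that comparison is easy to check. The sketch below uses only the prices quoted above; a workload of 1M input plus 1M output tokens reproduces the article's $35 vs. $5.22 example (the model names and the `task_cost` helper are illustrative, not a real API):

```python
# Per-million-token prices quoted in the article: (input $, output $).
PRICES = {
    "DeepSeek V4": (1.74, 3.48),
    "GPT-5.5": (5.00, 30.00),
    "Claude Opus 4.7": (5.00, 25.00),
    "Gemini 3.1 Pro": (2.00, 12.00),
}

def task_cost(model, input_tokens, output_tokens):
    """Dollar cost of one task for a given model at list prices."""
    price_in, price_out = PRICES[model]
    return (input_tokens / 1e6) * price_in + (output_tokens / 1e6) * price_out

# A task using 1M input and 1M output tokens, as in the article's example:
for model in PRICES:
    print(f"{model}: ${task_cost(model, 1_000_000, 1_000_000):.2f}")
# DeepSeek V4: $5.22
# GPT-5.5: $35.00
# Claude Opus 4.7: $30.00
# Gemini 3.1 Pro: $14.00
```

Note that the savings depend on the input/output mix: because output tokens carry most of GPT-5.5's cost ($30 vs. $5), output-heavy tasks see the largest gap, while input-heavy tasks land closer to a 65% reduction ($1.74 vs. $5).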
- DeepSeek V4 comes in two variants: V4-Pro (1.6T parameters) and V4-Flash (284B parameters), both open-source under MIT license.
- Pricing undercuts GPT-5.5 by up to 85% on typical tasks: $1.74 per 1M input tokens vs. $5 for GPT-5.5, with a 1M-token context window.
- Benchmark performance matches frontier US models, with strong gains in agentic tasks and coding; integrates with Claude Code and other agents.
Why It Matters
Open-source AI at 1/6th the cost pressures US labs to lower prices and rethink proprietary models.