Just when the global AI race seemed to be settling into a predictable rhythm, DeepSeek has done it again. On Friday, April 24, 2026, the Chinese AI startup dropped preview versions of its long-awaited DeepSeek V4 — and the early details are, to put it mildly, remarkable. For anyone who watched the original R1 release send shockwaves through global tech markets in January 2025, this moment has a familiar charge to it.
This is the model the industry has been tracking for months. And it’s finally here.
What Is DeepSeek V4?
DeepSeek V4 is the Chinese startup’s first major ground-up release since R1 rattled the industry in early 2025. Rather than a single model, the company launched two variants simultaneously: DeepSeek-V4-Pro and DeepSeek-V4-Flash.
The Pro version is the flagship: a behemoth with 1.6 trillion parameters, making it one of the largest open-source language models ever released. The Flash version is leaner at 284 billion parameters, designed for speed and efficiency. Both ship with a 1-million-token context window, an enormous leap that lets them process entire codebases, lengthy legal documents, or book-length research in a single pass.
Both are open-source under permissive licenses, continuing DeepSeek’s tradition of making its frontier models freely available for developers to download, run locally, and modify. That decision alone is a major geopolitical and commercial statement.
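For anyone planning to kick the tires once the weights land, the workflow will likely resemble DeepSeek’s earlier open releases. The sketch below assumes the checkpoints are published to Hugging Face under a repo id that mirrors the company’s past naming; that identifier and the exact loading flags are guesses until official model cards appear.

```python
# Minimal sketch of loading an open-weight DeepSeek release locally.
# The repo id "deepseek-ai/DeepSeek-V4-Flash" is a guess based on the
# naming of earlier releases; check the official Hugging Face org for
# the real identifier once the weights are published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V4-Flash"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    device_map="auto",       # shard across available GPUs
    trust_remote_code=True,  # earlier DeepSeek checkpoints shipped custom code
)

inputs = tokenizer("Explain mixture-of-experts routing.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even then, the 1.6-trillion-parameter Pro checkpoint will demand a multi-GPU node regardless of quantization; the 284-billion-parameter Flash variant is the realistic target for local experimentation.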
What Can It Do?
DeepSeek is making bold claims about V4-Pro’s performance. The company says the model beats all rival open models on mathematics and coding benchmarks, trailing only Google’s closed-source Gemini 3.1-Pro on world knowledge tasks. If those benchmarks hold up to independent scrutiny, V4-Pro would represent the most capable openly available model in existence.
The focus on coding is no accident. Reporting from earlier this year indicated that DeepSeek was specifically targeting code generation and complex, long-context coding prompts as V4’s headline capability. Pre-release leaks had suggested the model could score around 81% on SWE-bench Verified — the gold standard for software engineering tasks — compared to 67.8% for its predecessor. That would put it ahead of every current open-weight model on real-world software engineering challenges.
The context window upgrade is equally significant. One million tokens doesn’t just mean longer conversations. It means V4 can reason across entire repositories, analyze massive datasets in one shot, and maintain coherence over document lengths that were practically out of reach just a year ago.
One notable limitation at launch: both V4 models are text-only for now. DeepSeek confirmed it is actively working on multimodal capabilities that would allow the models to process images and video, but those features aren’t live in the preview release.
The Hardware Question
One of the most politically charged aspects of DeepSeek V4 is the hardware story behind it. Huawei confirmed on Friday that its Ascend supernode, powered by Ascend 950 AI chips, fully supports DeepSeek’s V4 models. Reports also indicate that DeepSeek gave Huawei early access to the model for hardware optimization while deliberately withholding it from Nvidia and AMD engineers.
This is a direct response to U.S. export controls restricting Chinese AI developers from accessing Nvidia’s most advanced chips. Rather than working around those restrictions quietly, DeepSeek appears to have leaned into domestic alternatives — and produced a frontier model that, at least according to their own benchmarks, doesn’t seem to have suffered for it.
Whether Huawei’s Ascend chips did the heavy lifting or whether Nvidia hardware still played a significant role in training remains unclear. But the symbolism is hard to miss: one of the world’s most capable open-source AI models was built, at least in part, on Chinese silicon.
Why This Matters
Let’s rewind. When DeepSeek released R1 in January 2025, it triggered a $1 trillion wipeout in global tech stocks — with Nvidia alone shedding $600 billion in market value in a single day. The reason? R1 offered performance comparable to models from OpenAI and Google at a fraction of the cost. Investors suddenly questioned whether the enormous infrastructure buildout powering Western AI giants was necessary at all.
DeepSeek V4 arrives in a different climate — one shaped partly by the lessons of that shock. But the core question it raises is the same: How much does raw compute spending actually buy you?
If V4-Pro delivers on its benchmark claims, it will once again challenge the assumption that frontier AI requires closed ecosystems, export-controlled chips, and multibillion-dollar training runs. It will also intensify pressure on companies like OpenAI, Anthropic, and Google to justify the premium they charge for closed-source access.
There’s also the business angle. Reports published days before the V4 launch revealed that Tencent and Alibaba were in talks to invest in DeepSeek at a valuation exceeding $20 billion — a signal that China’s broader tech establishment is rallying behind the startup as a strategic national asset.
What Comes Next?
The preview tag on today’s release matters. These are not fully production-ready models, and independent benchmarks will tell a more complete story than DeepSeek’s own announcements. The multimodal capabilities still in development will be a key feature to watch — vision and video processing are increasingly table stakes for enterprise AI applications.
For developers, the immediate practical question is API availability. Based on DeepSeek’s established pattern with earlier models, international API access through api.deepseek.com is expected to continue, but hasn’t been officially confirmed for V4 yet.
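If V4 does appear on that endpoint, integration should be straightforward, since DeepSeek’s API has historically been OpenAI-compatible. Here is a minimal sketch assuming that pattern holds; the model identifier is a placeholder, not an announced name.

```python
# Hedged sketch of calling a DeepSeek model through the OpenAI-compatible
# API the company has used for earlier releases. The model name
# "deepseek-v4" is a placeholder; V4 availability on api.deepseek.com
# has not been confirmed.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-v4",  # hypothetical identifier, not yet announced
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of a 1M-token context window."},
    ],
)
print(response.choices[0].message.content)
```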
For the rest of us, DeepSeek V4 is a reminder that the global AI race is not a two-horse contest between Silicon Valley giants. A Chinese startup, working under significant hardware restrictions, has once again built something that demands serious attention.
The “Sputnik moment” framing was always a bit dramatic. But DeepSeek keeps making it feel apt.