Foundation Models Series A

DeepSeek

The Chinese lab that broke the compute-moat story for frontier AI.

◆ Profile
DeepSeek
deepseek.com
Founded
2023
HQ
Hangzhou, China
Valuation
$10B+ (in talks, Apr 2026); offers reportedly up to $20B
Total raised
Previously all-internal from High-Flyer Capital; $300M+ first external round in progress
Revenue run-rate
Not disclosed
Team
~200
◆ The take

DeepSeek is the most important company to understand when thinking about the future of AI geopolitics. R1 wasn't just a model release — it was the moment the 'only US labs can build frontier models' assumption cracked publicly. The company is also proof that capital discipline and algorithmic innovation can partially substitute for the $10B+ training-compute budgets that US labs increasingly depend on. For US investors, DeepSeek is the single biggest reason to be skeptical of narratives built around compute-moat defensibility.

◆ Why it works

What's going for them.

  1. DeepSeek R1 (Jan 2025) was the moment the 'US compute moat' narrative broke — a reasoning model competitive with OpenAI o1, reportedly trained for ~$5.6M using H800 chips despite export controls.
  2. V4 launching in late April 2026 — trillion-parameter mixture-of-experts architecture with 1M-token context window. First Chinese model to match frontier US labs on capability axes, not just benchmarks.
  3. Open-weight release strategy that has made DeepSeek the default starting point for self-hosted AI outside the US and Europe, and a Hugging Face top-downloaded-models fixture.
  4. Parent-company funding (High-Flyer Capital, a quant hedge fund) gave DeepSeek capital discipline that most US labs never needed to develop — which shows in the cost-per-capability efficiency.
  5. First external fundraise in progress at $10B+, with reported offers at $20B — pricing that validates the company as China's single most important foundation-model lab, not a regional competitor.

What they built

DeepSeek ships foundation models optimized for reasoning, code, and efficient inference — with a research culture that prioritizes architectural innovation over raw scale. The model family spans DeepSeek-V3 (general-purpose), DeepSeek-R1 (reasoning), DeepSeek-Coder, and the upcoming V4 (trillion parameters, 1M context). The open-weight release pattern has built an enormous developer community that deploys DeepSeek models in environments where OpenAI and Anthropic APIs are inaccessible or undesirable.

How they got here

Liang Wenfeng founded DeepSeek in 2023 as a spin-out from High-Flyer Capital, the Chinese quant hedge fund he also founded. The unusual founding structure — parent-funded, with no external VCs for the first two years — gave DeepSeek the freedom to prioritize research quality over GTM velocity. The team stayed small (~200), the research output stayed disproportionate, and the model releases built a compounding reputation.

The inflection was January 2025. DeepSeek-R1 matched OpenAI’s o1 on reasoning benchmarks at a tiny fraction of the training cost, and the open-weight release let anyone verify the result. US equity markets moved on the news — a single Chinese open-source release caused a notable correction in AI infrastructure stocks. Through 2025, DeepSeek shipped V3, R1.5, Prover, and a steady cadence of research papers that tracked the frontier closely. V4 launches in late April 2026 with a trillion-parameter MoE architecture and 1M-token context.

As of April 2026, DeepSeek is running its first external fundraise — $300M+ at a valuation of at least $10B, with reported offers up to $20B from Tencent, Alibaba, and others. The shift from fully internal funding to an open cap table signals the company is preparing for a longer, more capital-intensive phase.

What’s ahead

Three questions dominate. First, V4 and beyond: can DeepSeek maintain capability parity with US labs as training runs cross $1B in compute cost? The architectural-efficiency thesis is credible but unproven at the frontier. Second, geopolitics: US export controls have tightened through 2025 and 2026; DeepSeek’s continued access to compute depends on either domestic Chinese alternatives (Huawei Ascend) reaching parity or smuggled inventories holding out. Third, commercial model: DeepSeek has never optimized for enterprise revenue the way US labs have; the external fundraise signals that’s changing.

Why it matters

DeepSeek is the single most consequential AI company outside the US — and the one whose trajectory will most shape whether AI converges toward a US-dominated oligopoly or a multipolar landscape. For AI founders globally, DeepSeek's story is proof that a small team with strong research discipline can genuinely compete at the frontier. For investors, understanding DeepSeek is how you avoid being caught on the wrong side of the compute-moat narrative.

◆ Conversations

Founder interview coming soon.

We'll be sitting down with the founders and operators of the companies we profile — on fundraising, product decisions, and what they're building next. If you're part of the DeepSeek team and want to share a perspective, get in touch.

◆ Notable customers
ByteDance
Baidu
Tencent Cloud (reseller)
Wide Chinese enterprise uptake via API
Hugging Face top-downloaded-models regular globally

Thinking about fundraising or M&A?

Amafi Advisory works with AI companies on strategy, fundraising, M&A, and technical advisory — even if you're just exploring.