Tags: china, open-source, qwen, deepseek, geopolitics, us-china

The Two Loops: How China's Open-Source AI Strategy Is Outpacing America

A new USCC report warns that China's open AI models now dominate global downloads. 80% of US startups use Chinese models. Washington is scrambling.

Vlad Makarov
9 min read

Seven out of ten. That's how many of the most downloaded AI models on Hugging Face in late 2025 came from Chinese labs. A new report from the U.S.-China Economic and Security Review Commission, published March 23, lays out the uncomfortable math — and Washington doesn't have a good answer yet.

The Report That Started the Conversation

The USCC paper, titled "Two Loops: How China's Open AI Strategy Reinforces Its Industrial Dominance," argues that China's advantage operates through two reinforcing cycles that US export controls were never designed to address.

The first is the digital loop. Chinese labs release capable open models for free, developers worldwide adopt them, that adoption generates feedback and derivative work, and the resulting ecosystem makes the next generation of Chinese models even better. Alibaba's Qwen now has over 100,000 derivative models on Hugging Face and hit 700 million downloads in January alone — surpassing Meta's Llama in cumulative global downloads.

The second loop is physical. China deploys AI across its factories, logistics networks, and robotics infrastructure at a scale no other country matches. That deployment generates proprietary real-world data that feeds back into model improvement. Beijing formally designated data as the fifth factor of production in 2020 and became the first country to let enterprises carry data assets on their balance sheets.

The report's core argument is blunt: US export controls target the digital loop by restricting advanced chips for frontier training, but they "are not well suited to addressing the physical loop of deployment-driven data creation."

The Numbers That Should Worry Washington

The statistics paint a clear picture. Approximately 80% of US AI startups now use Chinese open-source models, according to a partner at Andreessen Horowitz. Airbnb CEO Brian Chesky revealed that his company's customer service agent relies heavily on Alibaba's Qwen, calling it "very good" and "fast and cheap." Moonshot AI's Kimi K2.5 costs a quarter as much as OpenAI's GPT-5.2 while matching it on capability benchmarks.

Meanwhile, America's Big Four hyperscalers have collectively announced $650 billion in AI spending for 2026, with total US AI compute infrastructure projected to surpass $2.8 trillion by 2029. The money is flowing — but it's flowing into closed models that fewer people are actually using.

Kyle Chan, a fellow at Brookings' Thornton China Center, puts it starkly: "While individual American AI companies are making billions from closed models, the US as a whole is ceding the open-source domain to China."

The Security Angle

It's not just about market share. When NIST evaluated DeepSeek's models in September 2025, it found that agents built on DeepSeek's most secure model were, on average, 12 times more likely than those built on US frontier models to follow malicious instructions. In simulated tests, hijacked agents sent phishing emails, downloaded malware, and exfiltrated user credentials.

This creates a paradox. The more US companies adopt Chinese open-source models for their cost advantage, the more they expose themselves to supply chain risks that are difficult to audit. As one analyst put it: "Enterprises are often several layers removed from the original model source. Models enter through copilots, SaaS platforms, and API layers."

What the US Is Considering

The policy response so far has been fragmented. The White House's AI Action Plan advocates support for open-source development, and some labs have responded — Google with Gemma, OpenAI with GPT-OSS, NVIDIA with Nemotron 3. But these are catch-up moves against an ecosystem China has been building for years.

More concerning for open-source advocates, Meta is reportedly preparing to shift its next-generation model to a closed, API-only approach. The USCC report flags this directly: "Meta's retreat from openness would leave the United States without a major frontier model developer anchoring its open AI ecosystem at precisely the moment China's state-backed open development is accelerating."

New chip export rules are under consideration too. Reuters reported on March 5 that the administration is mulling a requirement for foreign investment in US AI data centers as a condition of chip exports — a significant shift from Biden-era exemptions.

The Feedback Loops Are Already Running

What makes the "two loops" framework so compelling is that both cycles are already in motion and reinforcing each other. Chinese labs build on each other's published innovations — Zhipu built GLM 5.1 on architectural innovations published by DeepSeek. The open ecosystem reduces the compute needed for effective deployment, which in turn makes China's physical data advantage more valuable, not less.

As the report notes, "as open models reduce the compute required for effective deployment, China's ability to generate proprietary industrial data at pace and scale becomes increasingly independent of access to cutting-edge hardware." In other words, the export controls designed to slow China down may already be targeting the wrong bottleneck.

Michael Kuiken, the USCC Vice-Chair, chose his words carefully: "It's too soon to tell how big a threat this poses, but their advances demand close attention." Coming from the commission tasked with monitoring the US-China economic relationship, that's about as loud as the alarm gets.
