Chinese AI developer DeepSeek stated it spent $294,000 on training its R1 model, a figure significantly lower than those reported by its U.S. competitors. This information, published in an academic paper, is likely to restart the debate about China’s position in the race to develop artificial intelligence.
The rare update from the Hangzhou-based company, its first published estimate of R1's training costs, appeared in a peer-reviewed article in the academic journal Nature on Wednesday.
In January, DeepSeek’s release of what it claimed were lower-cost AI systems prompted a global sell-off in tech stocks, as investors worried the new models could challenge the dominance of AI leaders like Nvidia. Since then, the company and its founder, Liang Wenfeng, have largely stayed out of the public eye, apart from a few new product updates.
The Nature article, which listed Liang as a co-author, stated that DeepSeek’s reasoning-focused R1 model cost $294,000 to train and used 512 Nvidia H800 chips. A previous version of the article, published in January, did not include this information.
Sam Altman, CEO of the U.S. AI giant OpenAI, said in 2023 that what he called "foundational model training" had cost "much more" than $100 million, although his company has not provided detailed figures for any of its releases. Training costs for the large language models that power AI chatbots refer to the expense of running a cluster of powerful chips for weeks or months to process vast amounts of text and code.
Some of DeepSeek’s statements about its development costs and the technology it used have been questioned by U.S. companies and officials. The H800 chips it mentioned were designed by Nvidia specifically for the Chinese market after the U.S. banned the export of the company's more powerful H100 and A100 AI chips to China in October 2022.
In June, U.S. officials told Reuters that DeepSeek had access to “large volumes” of H100 chips procured after the U.S. export controls were put in place. At the time, Nvidia told Reuters that DeepSeek had used lawfully acquired H800 chips, not H100s.
In a supplementary information document accompanying the Nature article, the company acknowledged for the first time that it does own A100 chips and used them in the preparatory stages of development. “Regarding our research on DeepSeek-R1, we utilized the A100 GPUs to prepare for the experiments with a smaller model,” the researchers wrote. After this initial phase, R1 was trained for a total of 80 hours on a cluster of 512 H800 chips, they added.
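Taken together, the reported figures allow a simple back-of-envelope check. The Python sketch below, included purely for illustration, multiplies the 512 chips by the 80 reported hours to get total GPU-hours, then divides the stated $294,000 by that total; the resulting per-GPU-hour rate is an inference from the article's numbers, not a figure DeepSeek has published, and it assumes the cost covers only this final training run.

```python
# Back-of-envelope check of the figures reported in the Nature article.
# Assumption: the $294,000 covers only the 80-hour run on 512 H800 chips;
# the implied per-GPU-hour rate is an illustration, not a DeepSeek figure.
num_chips = 512            # Nvidia H800 GPUs in the cluster
train_hours = 80           # reported R1 training duration
reported_cost_usd = 294_000

gpu_hours = num_chips * train_hours            # 512 * 80 = 40,960 GPU-hours
implied_rate = reported_cost_usd / gpu_hours   # ~ $7.18 per GPU-hour

print(f"{gpu_hours:,} GPU-hours, ~${implied_rate:.2f} per GPU-hour implied")
```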
Reuters has previously reported that one reason DeepSeek was able to attract top talent in China was that it was one of the few domestic companies operating an A100 supercomputing cluster.