Fortune
David Meyer

AI chipmaker Cerebras says it's been 'crushed with demand' for China's DeepSeek from business customers

(Credit: Photograph by Stuart Isett/Fortune)

The upstart AI chip company Cerebras has started offering China’s market-shaking DeepSeek on its U.S. servers.

Cerebras makes uncommonly large chips that are particularly good at speedy inference—that is, running rather than training AI models. The company claims that its hardware runs the midsize 70-billion-parameter version of DeepSeek’s R1 model 57 times as fast as what can be achieved on the fastest GPUs.

Andrew Feldman, Cerebras’s CEO, told Fortune Thursday that there was huge enthusiasm among his enterprise customers for DeepSeek, which is proving disruptive because it matches the most cutting-edge “reasoning” models from companies such as OpenAI while costing far less to train and use (although there are still many questions about the extent of the cost savings that DeepSeek has touted).

“We were crushed with demand,” Feldman said of the 10 days since DeepSeek released R1.

In a demonstration of the speed of Cerebras’s DeepSeek service, Feldman prompted the model to write a program implementing chess in the Python programming language. It took 1.5 seconds, while OpenAI’s o1-mini reasoning model took 22 seconds to complete the same task on traditional GPUs. (Unlike DeepSeek and Meta’s Llama series, OpenAI’s models are proprietary and not available for Cerebras to use, so an apples-to-apples comparison on Cerebras’s hardware was not possible.)
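
For readers curious what such a request looks like in practice, the sketch below sends the same chess prompt to a hosted DeepSeek R1 model through an OpenAI-compatible Python client; the endpoint URL and model identifier are illustrative assumptions, not confirmed details of Cerebras's service.

    # Minimal sketch, assuming an OpenAI-compatible hosted inference endpoint.
    # The base_url and model name are illustrative assumptions, not confirmed
    # details of Cerebras's service.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",  # assumed endpoint URL
        api_key="YOUR_API_KEY",                 # placeholder credential
    )

    response = client.chat.completions.create(
        model="deepseek-r1-distill-llama-70b",  # assumed identifier for the 70B R1 model
        messages=[
            {"role": "user", "content": "Write a program implementing chess in Python."}
        ],
    )

    # Print the model's reply (the generated chess program)
    print(response.choices[0].message.content)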

Feldman highlighted DeepSeek’s superior performance in math and coding tasks, compared with o1. “You get better answers, faster” on some types of tasks, he said.

However, there are significant concerns over DeepSeek’s safety and inherent biases. When R1 is used through DeepSeek’s official app or web interface, for instance, the model censors itself when asked about topics Beijing deems sensitive. It has also proved vulnerable to manipulation, making it possible to use the model to generate things like bomb-making instructions. The U.S. Navy has ordered its staff to stay away from it.

Like American peers such as rival chipmaker Groq and AI “answer engine” Perplexity, which have also started offering DeepSeek to their users in the past few days, Cerebras is hoping U.S.-based hosting will alleviate some of those concerns for enterprise users.

“If you use [DeepSeek’s] app, which is now the most popular app in the world, you are sending your data to China,” Feldman said. “Don’t do that. You can use the LLM served by U.S. companies in the U.S. like us, Perplexity, and others.”

Cerebras, which sells its specialized AI chips and offers AI services to customers through the cloud, filed paperwork in September to list its shares on the Nasdaq in an initial public offering.

While Feldman acknowledged that some of the problems with DeepSeek are very real, he argued that users just need to show common sense. In this case, he suggested, people should use the model while understanding that it has a Chinese worldview embedded into it and is more reliable for some tasks than others.

“When you use a chain saw, you should probably wear steel-toed boots and safety goggles,” Feldman quipped. “I don’t think that means you don’t use chain saws; it means you use them carefully.”
