AI200 and AI250 Accelerators Target the Booming Data Center Market
Chipmaker Qualcomm announced Monday that it will launch a new generation of artificial intelligence accelerator chips, marking its official entry into the competitive AI data center space long dominated by Nvidia. The news sent Qualcomm’s stock soaring 15% as investors cheered the move into one of technology’s fastest-growing markets.
The new chips, named AI200 and AI250, will target large-scale AI infrastructure and cloud providers. The AI200 is expected to go on sale in 2026, with the AI250 to follow in 2027. Both models will be sold as rack-scale systems that fill a full, liquid-cooled server rack, similar to offerings from Nvidia and AMD that link up to 72 chips to operate as a single powerful AI computer.
From Smartphones to Data Centers
Qualcomm’s foray into AI servers marks a strategic evolution for a company best known for powering smartphones and wireless devices. Its new data center processors build on the Hexagon neural processing units (NPUs) already used in Qualcomm’s mobile chips.
“We first wanted to prove ourselves in other domains, and once we built our strength there, it was easy for us to go up a notch into the data center level,” said Durga Malladi, Qualcomm’s general manager for data center and edge computing.
The chips will focus primarily on inference, running already-trained AI models, rather than on training them, the workload where Nvidia's GPUs are most entrenched. Qualcomm says this focus allows it to optimize for power efficiency, lower costs, and faster deployment for customers such as cloud service providers and hyperscalers.
Challenging Nvidia’s AI Dominance
The move positions Qualcomm as a serious new competitor in a market that McKinsey estimates will draw $6.7 trillion in capital spending by 2030, mostly on AI-driven data centers. Nvidia currently commands more than 90% of the AI accelerator market, and its chips power the systems that OpenAI, Microsoft, and Google use to train the large language models behind services such as ChatGPT.
But as demand for AI infrastructure explodes, companies are looking for alternatives to Nvidia. OpenAI recently announced it would also buy chips from AMD and consider taking a stake in the company, while tech giants such as Google, Amazon, and Microsoft are developing their own AI chips. Qualcomm’s entrance adds another player capable of shaking up this landscape.
Malladi said Qualcomm’s AI systems will offer lower total cost of ownership than rival GPU-based clusters at comparable performance. Each rack consumes around 160 kilowatts of power, in line with high-end Nvidia GPU racks, but promises better energy efficiency and memory handling. The company said its AI accelerator cards support 768 gigabytes of memory, exceeding the capacity of current Nvidia and AMD offerings.
Strategic Partnerships and Future Outlook
In May, Qualcomm announced a partnership with Humain, a Saudi Arabian firm building regional data centers powered by Qualcomm’s AI inference chips. The deal could scale to as many as 200 megawatts of capacity once fully deployed, signaling Qualcomm’s ambition to expand globally across emerging AI infrastructure markets.
Malladi also confirmed Qualcomm would sell its AI chips, CPUs, and other components individually for customers that prefer to build custom rack systems. “What we’ve tried to do is make sure customers can take all of it — or mix and match,” he said.
While the company has not yet disclosed pricing details, analysts say Qualcomm’s competitive entry could pressure Nvidia’s margins and accelerate innovation across the AI semiconductor industry. As enterprises race to build new data centers, Qualcomm’s arrival in the space could reshape how AI workloads are powered and optimized in the years ahead.

