AMD sees next AI chip in mass production later this year

By Max A. Cherney

SAN FRANCISCO (Reuters) - Advanced Micro Devices said on Thursday it plans to start mass production of a new version of its artificial-intelligence chip called the MI325X in the fourth quarter of the year, as it seeks to bolster its presence in a market dominated by Nvidia.

At an event in San Francisco, AMD CEO Lisa Su said the company plans to release its next-generation MI350 series chips in the second half of 2025. The chips will include more memory and a new underlying architecture that AMD said will improve performance significantly over the prior MI300X and MI250X chips.

The announcements were broadly expected based on AMD disclosures earlier this year. They failed to cheer investors, who sent AMD shares down nearly 5% in afternoon trading. Some analysts attributed the fall to the absence of large new cloud-computing customers for the chips.

Shares of rival Nvidia were up 1.5%, while Intel fell 1.6%.

Demand for AI processors from major technology firms such as Microsoft and Meta Platforms has been far outpacing supply from Nvidia and AMD, allowing the semiconductor companies to sell as much as they can produce.

That has driven a massive rally in chip stocks over the past two years, with AMD’s shares up about 30% since a recent low in early August. 

“There are no new customers announced so far,” said Summit Insights research analyst Kinngai Chan, adding that the stock had gained ahead of the event in anticipation of “something new.”

Santa Clara, California-based AMD said vendors such as Super Micro Computer would begin to ship its MI325X AI chip to customers in the first quarter of 2025. The AMD design aims to compete with Nvidia’s Blackwell architecture.

The MI325X chip uses the same architecture as the already-available MI300X, which AMD launched last year. The new chip includes a new type of memory that AMD said will speed AI calculations.

AMD’s next-generation AI chips are likely to put further pressure on Intel, which has struggled to deploy a coherent AI chip strategy. Intel has forecast AI chip sales of more than $500 million in 2024.

NEW SERVER, PC CHIPS

AMD’s Su also said at the event that the company does not currently have plans to use contract chip manufacturers beyond Taiwan’s TSMC for advanced manufacturing processes, which are used to produce speedy AI chips.

“We would love to use more capacity outside of Taiwan. We are very aggressive in the use of TSMC’s Arizona facility,” Su said.

AMD also unveiled several networking chips that help speed the movement of data between chips and systems inside data centers.

The company announced the availability of a new version of its server central processing unit (CPU) design. The family of chips, formerly codenamed Turin, includes a version designed to keep graphics processing units (GPUs) fed with data, which will speed AI processing.

The flagship chip boasts nearly 200 processing cores and is priced at $14,813. The whole line of processors uses the Zen 5 architecture, which offers speed gains of as much as 37% for advanced AI data crunching.

Beyond the data center chips, AMD announced three new PC chips aimed at laptops, based on the Zen 5 architecture. The new chips are tuned to run AI applications and will be capable of running Microsoft’s Copilot+ software.

In July, AMD raised its AI chip forecast to $4.5 billion for the year from its previous target of $4 billion. Demand for its MI300X chips has surged because of the frenzy around building and deploying generative AI products.

Analysts expect AMD to report data center revenue of $12.83 billion this year, according to LSEG estimates, while Wall Street expects Nvidia to report data center revenue of $110.36 billion. Data center revenue is a proxy for sales of the AI chips needed to build and run AI applications.

Analysts’ rising earnings expectations have kept AMD and Nvidia’s valuations in check despite the share surge. Both companies trade at more than 33 times their 12-month forward earnings estimates, compared with the benchmark S&P 500’s 22.3 times.

(Reporting by Max Cherney in San Francisco; additional reporting by Aditya Soni and Arsheeya Bajwa in Bengaluru; Editing by Sonali Paul, Peter Henderson and Matthew Lewis)