Lisa Su, CEO of Advanced Micro Devices, testifies during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in Hart building on Thursday, May 8, 2025.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

Advanced Micro Devices on Thursday unveiled new details about its next-generation AI chips, the Instinct MI400 series, which will ship next year.

The MI400 chips can be assembled into a full server rack called Helios, AMD said, allowing thousands of the chips to be tied together and used as one “rack-scale” system.

“For the first time, we architected every part of the rack as a unified system,” AMD CEO Lisa Su said at a launch event in San Jose, California, on Thursday.

OpenAI CEO Sam Altman appeared on stage with Su and said his company would use the AMD chips.

“When you first started telling me about the specs, I was like, there’s no way, that just sounds totally crazy,” Altman said. “It’s gonna be an amazing thing.”

AMD’s rack-scale setup will make the chips look to a user like one system, which is important for most artificial intelligence customers like cloud providers and companies that develop large language models. Those customers want “hyperscale” clusters of AI computers that can span entire data centers and use massive amounts of power.

“Think of Helios as really a rack that functions like a single, massive compute engine,” said Su, comparing it against Nvidia’s Vera Rubin racks, which are expected to be released next year.

OpenAI CEO Sam Altman poses during the Artificial Intelligence (AI) Action Summit, at the Grand Palais, in Paris, on February 11, 2025. 

Joel Saget | Afp | Getty Images

AMD’s rack-scale technology also enables its latest chips to compete with Nvidia’s Blackwell chips, which already come in configurations with 72 graphics-processing units stitched together. Nvidia is AMD’s only major rival in big data center GPUs for developing and deploying AI applications.

OpenAI — a notable Nvidia customer — has been giving AMD feedback on its MI400 roadmap, the chip company said. AMD plans to compete with Nvidia on price with both the MI400 chips and this year’s MI355X chips. A company executive told reporters on Wednesday that the chips will cost less to operate thanks to lower power consumption, and that AMD is undercutting Nvidia with “aggressive” prices.

So far, Nvidia has dominated the market for data center GPUs, partially because it was the first company to develop the kind of software needed for AI developers to take advantage of chips originally designed to display graphics for 3D games. Over the past decade, before the AI boom, AMD focused on competing against Intel in server CPUs.

Su said that AMD’s MI355X can outperform Nvidia’s Blackwell chips, despite Nvidia using its “proprietary” CUDA software.

“It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress,” Su said.

AMD shares are flat so far in 2025, signaling that Wall Street doesn’t yet see it as a major threat to Nvidia’s dominance.


Andrew Dieckmann, AMD’s general manager for data center GPUs, said Wednesday that AMD’s AI chips would cost less to operate and less to acquire.

“Across the board, there is a meaningful cost of acquisition delta that we then layer on our performance competitive advantage on top of, so significant double-digit percentage savings,” Dieckmann said.

Over the next few years, big cloud companies and countries alike are poised to spend hundreds of billions of dollars to build new data center clusters around GPUs in order to accelerate the development of cutting-edge AI models. That includes $300 billion this year alone in planned capital expenditures from megacap technology companies.

AMD is expecting the total market for AI chips to exceed $500 billion by 2028, although it hasn’t said how much of that market it can claim — Nvidia has over 90% of the market currently, according to analyst estimates.

Both companies have committed to releasing new AI chips on an annual basis, rather than every other year, emphasizing how fierce competition has become and how important bleeding-edge AI chip technology is for companies like Microsoft, Oracle and Amazon.

AMD has bought or invested in 25 AI companies in the past year, Su said, including ZT Systems, a server maker it acquired earlier this year that developed the technology AMD needed to build its rack-sized systems.

“These AI systems are getting super complicated, and full-stack solutions are really critical,” Su said.

What AMD is selling now

Currently, the most advanced AMD AI chip being installed by cloud providers is the Instinct MI355X, which the company said started shipping in production last month. AMD said the chip would be available for rent from cloud providers starting in the third quarter.

Companies building large data center clusters for AI want alternatives to Nvidia, not only to keep costs down and provide flexibility, but also to fill a growing need for “inference,” or the computing power needed for actually deploying a chatbot or generative AI application, which can use much more processing power than traditional server applications.

“What has really changed is the demand for inference has grown significantly,” Su said.

AMD officials said Thursday that they believe their new chips are superior to Nvidia’s for inference. That’s because AMD’s chips are equipped with more high-speed memory, which allows bigger AI models to run on a single GPU.

The MI355X has seven times the computing power of its predecessor, AMD said. Those chips will be able to compete with Nvidia’s B100 and B200 chips, which have been shipping since late last year.

AMD said that its Instinct chips have been adopted by seven of the 10 largest AI customers, including OpenAI, Tesla, xAI, and Cohere.

Oracle plans to offer clusters with over 131,000 MI355X chips to its customers, AMD said.

Officials from Meta said Thursday that the company is using clusters of AMD’s CPUs and GPUs to run inference for its Llama model, and that it plans to buy AMD’s next-generation servers.

A Microsoft representative said that it uses AMD chips to serve its Copilot AI features.

Competing on price

AMD declined to say how much its chips cost — it doesn’t sell chips by themselves, and end-users usually buy them through a hardware company like Dell or Super Micro Computer — but the company is planning for the MI400 chips to compete on price.

The Santa Clara company is pairing its GPUs with its CPUs and with networking chips from its 2022 acquisition of Pensando to build its Helios racks. That means greater adoption of its AI chips should also benefit the rest of AMD’s business. It’s also using an open networking technology called UALink to closely integrate its rack systems, in contrast to Nvidia’s proprietary NVLink.

AMD claims its MI355X can deliver 40% more tokens — a measure of AI output — per dollar than Nvidia’s chips because its chips use less power than its rival’s.

Data center GPUs can cost tens of thousands of dollars per chip, and cloud companies usually buy them in large quantities.

AMD’s AI chip business is still much smaller than Nvidia’s. It said it had $5 billion in AI sales in its fiscal 2024, but JP Morgan analysts are expecting 60% growth in the category this year.

Tech founders call on Sequoia Capital to denounce VC Shaun Maguire’s Mamdani comments


Almost 600 people have signed an open letter to leaders at venture firm Sequoia Capital after one of its partners, Shaun Maguire, posted what the group described as a “deliberate, inflammatory attack” against the Muslim Democratic mayoral candidate in New York City.

Maguire, a vocal supporter of President Donald Trump, posted on X over the weekend that Zohran Mamdani, who won the Democratic primary last month, “comes from a culture that lies about everything” and is out to advance “his Islamist agenda.”

The post had 5.3 million views as of Monday afternoon. Maguire, whose investments include Elon Musk’s SpaceX and X as well as artificial intelligence startup Safe Superintelligence, also published a video on X explaining the remark.

Those signing the letter are asking Sequoia to condemn Maguire’s comments and apologize to Mamdani and Muslim founders. They also want the firm to authorize an independent investigation of Maguire’s behavior in the past two years and post “a zero-tolerance policy on hate speech and religious bigotry.”

They are asking the firm for a public response by July 14, or “we will proceed with broader public disclosure, media outreach and mobilizing our networks to ensure accountability,” the letter says.

Sequoia declined to comment. Maguire didn’t respond to a request for comment, but wrote in a post about the letter on Wednesday that, “You can try everything you want to silence me, but it will just embolden me.”

Among the signees are Mudassir Sheikha, CEO of ride-hailing service Careem, and Amr Awadallah, CEO of AI startup Vectara. Also on the list is Abubakar Abid, who works in machine learning at Hugging Face, which is backed by Sequoia, and Ahmed Sabbah, CEO of Telda, a financial technology startup that Sequoia first invested in four years ago.

At least three founders of startups that have gone through the accelerator program Y Combinator added their names to the letter.

Sequoia as a firm is no stranger to politics. Doug Leone, who led the firm until 2022 and remains a partner, is a longtime Republican donor who supported Trump in the 2024 election. Following Trump’s victory in November, Leone posted on X, “To all Trump voters: you no longer have to hide in the shadows…..you’re the majority!!”

By contrast, Leone’s predecessor, Mike Moritz, is a Democratic megadonor who criticized Trump and, in August, slammed his colleagues in the tech industry for lining up behind the Republican nominee. In a Financial Times opinion piece, Moritz wrote that Trump’s tech supporters were “making a big mistake.”

“I doubt whether any of them would want him as part of an investment syndicate that they organised,” wrote Moritz, who stepped down from Sequoia in 2023, over a decade after giving up a management role at the firm. “Why then do they dismiss his recent criminal conviction as nothing more than a politically inspired witch-hunt over a simple book-keeping error?”

Neither Leone nor Moritz returned messages seeking comment.

Roelof Botha, Sequoia’s current lead partner, has taken a more neutral stance. Botha said at an event last July that Sequoia as a partnership doesn’t “take a political point of view,” adding that he’s “not a registered member of either party.” Botha said he’s “proud of the fact that we’ve enabled many of our partners to express their respected individual views along the way, and given them that freedom.”

Maguire has long been open with his political views. He said on X last year that he had “just donated $300k to President Trump.”

Mamdani, a self-described democratic socialist, has drawn the ire of many people in tech and in the business community more broadly since defeating former New York Gov. Andrew Cuomo in the June primary.

— CNBC’s Ari Levy contributed to this report.

Samsung expects second-quarter profits to more than halve as it struggles to capture AI demand


Samsung signage during the Nvidia GPU Technology Conference (GTC) in San Jose, California, US, on Thursday, March 20, 2025.

David Paul Morris | Bloomberg | Getty Images

South Korea’s Samsung Electronics on Tuesday forecast a 56% fall in profits for the second quarter as the company struggles to capture demand from artificial intelligence chip leader Nvidia.

The memory chip and smartphone maker said in its guidance that operating profit for the quarter that ended in June was projected to be around 4.6 trillion Korean won, down from 10.44 trillion won a year earlier.

The figure is a deeper plunge than projected by LSEG smart estimates, which are weighted toward forecasts from analysts who are more consistently accurate.

According to the smart estimates, Samsung was expected to post an operating profit of 6.26 trillion won ($4.57 billion) for the quarter. Meanwhile, Samsung projected its revenue to hit 74 trillion won, falling short of LSEG smart estimates of 75.55 trillion won.

Samsung is a leading player in the global smartphone market and is also one of the world’s largest makers of memory chips, which are utilized in devices such as laptops and servers.

However, the company has been falling behind competitors like SK Hynix and Micron in high-bandwidth memory chips — an advanced type of memory that is being deployed in AI chips.

“The disappointing earnings are due to ongoing operating losses in the foundry business, while the upside in high-margin HBM business remains muted this quarter,” MS Hwang, Research Director at Counterpoint Research, said about the earnings guidance.

SK Hynix, the leader in HBM, has secured a position as Nvidia’s key supplier. While Samsung has reportedly been working to get the latest version of its HBM chips certified by Nvidia, a report from a local outlet suggests these plans have been pushed back to at least September.

The company did not respond to a request for comment on the status of its deals with Nvidia.

Ray Wang, Research Director of Semiconductors, Supply Chain and Emerging Technology at Futurum Group, told CNBC that it is clear Samsung has yet to pass Nvidia’s qualification for its most advanced HBM.

“Given that Nvidia accounts for roughly 70% of global HBM demand, the delay meaningfully caps near-term upside,” Wang said. He noted that while Samsung has secured some HBM supply for AI processors from AMD, this win is unlikely to contribute to second-quarter results due to the timing of production ramps.

Meanwhile, Samsung’s chip foundry business continues to face weak orders and serious competition from Taiwan Semiconductor Manufacturing Company, Wang added.

Reuters reported in September that Samsung had instructed its subsidiaries worldwide to cut 30% of staff in some divisions, citing sources familiar with the matter.

Waymo to begin testing in Philadelphia with safety drivers behind the wheel


A Waymo autonomous self-driving Jaguar electric vehicle sits parked at an EVgo charging station in Los Angeles, California, on May 15, 2024.

Patrick T. Fallon | AFP | Getty Images

Waymo said it will begin testing in Philadelphia, with a limited fleet of vehicles and human safety drivers behind the wheel.

“This city is a National Treasure,” Waymo wrote in a post on X on Monday. “It’s a city of love, where eagles fly with a gritty spirit and cheese that spreads and cheese that steaks. Our road trip continues to Philly next.”

The Alphabet-owned company confirmed to CNBC that it will be testing in Pennsylvania’s largest city through the fall, adding that the initial fleet of cars will be manually driven through the more complex parts of Philadelphia, including downtown and on freeways.

“Folks will see our vehicles driving at all hours throughout various neighborhoods, from North Central to Eastwick, and from University City to as far east as the Delaware River,” a Waymo spokesperson said.

With its so-called road trips, Waymo seeks to collect mapping data and evaluate how its autonomous technology, Waymo Driver, performs in new environments, handling traffic patterns and local infrastructure. Road trips are often used as a way for the company to gauge whether it can potentially offer a paid ride-share service in a particular location.

The expanded testing, which will go through the fall, comes as Waymo aims for a broader rollout. Last month, the company announced plans to drive vehicles manually in New York for testing, marking the first step toward potentially cracking the largest U.S. city. Waymo applied for a permit with the New York City Department of Transportation to operate autonomously with a trained specialist behind the wheel in Manhattan. State law currently doesn’t allow for such driverless operations.

Waymo One provides more than 250,000 paid trips each week across Phoenix, San Francisco, Los Angeles, and Austin, Texas, and is preparing to bring fully autonomous rides to Atlanta, Miami, and Washington, D.C., in 2026.

Alphabet has been under pressure to monetize artificial intelligence products as it bolsters spending on infrastructure. Alphabet’s “Other Bets” segment, which includes Waymo, brought in revenue of $1.65 billion in 2024, up from $1.53 billion in 2023. However, the segment lost $4.44 billion last year, compared to a loss of $4.09 billion the previous year.
