Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, US, on Tuesday, Sept. 23, 2025.

Kyle Grillot | Bloomberg | Getty Images

Broadcom and OpenAI have made their partnership official.

OpenAI and Broadcom said Monday that they’re jointly building and deploying 10 gigawatts of custom artificial intelligence accelerators as part of a broader effort across the industry to scale AI infrastructure.

Broadcom shares climbed 9% following news of the deal.

The companies didn’t disclose financial terms.

While the companies have been working together for 18 months, they’re now going public with plans to develop and deploy racks of OpenAI-designed chips starting late next year. OpenAI has announced massive deals in recent weeks with Nvidia, Oracle and Advanced Micro Devices, as it tries to secure the capital and compute needs necessary for its historically ambitious AI buildout plans.

“These things have gotten so complex you need the whole thing,” OpenAI CEO Sam Altman said in a podcast with OpenAI and Broadcom executives that the companies released along with the news.

The systems include networking, memory and compute — all customized for OpenAI’s workloads and built on Broadcom’s Ethernet stack. By designing its own chips, OpenAI can bring compute costs down and stretch its infrastructure dollars further. Industry estimates peg the cost of a 1-gigawatt data center at roughly $50 billion, with $35 billion of that typically allocated to chips — based on Nvidia’s current pricing.
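
For scale, a rough back-of-envelope calculation using those industry estimates (a sketch based on the figures cited above, not numbers the companies disclosed):

```python
# Back-of-envelope buildout cost using the article's industry estimates
# (assumed figures: ~$50B per gigawatt of data center, ~$35B of that on chips).
COST_PER_GW_USD_B = 50   # total data center cost per GW, in $ billions
CHIPS_PER_GW_USD_B = 35  # chip portion per GW, in $ billions

def buildout_cost(gigawatts: float) -> tuple[float, float]:
    """Return (total cost, chip cost) in $ billions for a given capacity."""
    return gigawatts * COST_PER_GW_USD_B, gigawatts * CHIPS_PER_GW_USD_B

total, chips = buildout_cost(10)  # the 10 GW OpenAI-Broadcom deal
print(f"Total: ${total:.0f}B, chips: ${chips:.0f}B, chip share: {chips/total:.0%}")
# prints: Total: $500B, chips: $350B, chip share: 70%
```

At those assumed prices, chips alone would account for roughly 70% of a buildout's cost, which is the margin OpenAI hopes to compress by designing its own silicon.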

The Broadcom deal provides “a gigantic amount of computing infrastructure to serve the needs of the world to use advanced intelligence,” Altman said. “We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models — all of that.”

Broadcom has been one of the biggest beneficiaries of the generative AI boom, as hyperscalers have been snapping up its custom AI chips, which the company calls XPUs. Broadcom doesn’t name its large web-scale customers, but since last year analysts have identified its first three clients as Google, Meta and TikTok parent ByteDance.

Shares of Broadcom are up 40% this year after more than doubling in 2024, and the company’s market cap has surpassed $1.5 trillion.

OpenAI President Greg Brockman said the company used its own models to accelerate chip design and improve efficiency.

“We’ve been able to get massive area reductions,” he said in the podcast. “You take components that humans have already optimized and just pour compute into it, and the model comes out with its own optimizations.”

Broadcom CEO Hock Tan said in the same conversation that OpenAI is the company building “the most-advanced” frontier models.

“You continue to need compute capacity — the best, latest compute capacity — as you progress in a road map towards a better and better frontier model and towards superintelligence,” he said. “If you do your own chips, you control your destiny.”

Hock Tan, CEO of Broadcom.

Martin H. Simon | Bloomberg | Getty Images

Altman indicated that 10 gigawatts is just the beginning.

“Even though it’s vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price — the world will absorb it super fast and just find incredible new things to use it for,” he said.

OpenAI today operates on just over 2 gigawatts of compute capacity.

Altman said that’s been enough to scale ChatGPT to where it is today, as well as develop and launch video creation service Sora and do a lot of AI research. But demand is soaring.

OpenAI has announced roughly 33 gigawatts of compute commitments over the past three weeks across partnerships with Nvidia, Oracle, AMD and Broadcom.

“If we had 30 gigawatts today with today’s quality of models,” he added, “I think you would still saturate that relatively quickly in terms of what people would do.”

WATCH: China opens antitrust probe into Qualcomm’s Autotalks deals

OpenAI temporarily blocked from using ‘Cameo’ after trademark lawsuit

Dado Ruvic | Reuters

OpenAI will not be allowed to use the word “cameo” to name any products or features in its Sora app for a month, after a federal judge issued a temporary restraining order against the AI startup over the term.

U.S. District Judge Eumi K. Lee granted a temporary restraining order on Monday, blocking OpenAI from using the “cameo” mark or similar words like “Kameo” or “CameoVideo” for any function related to Sora, the company’s AI-generated video app.

“We disagree with the complaint’s assertion that anyone can claim exclusive ownership over the word ‘cameo’, and we look forward to continuing to make our case to the court,” an OpenAI spokesperson told CNBC.

Lee granted the order after OpenAI was sued in October by Cameo, a platform that allows users to purchase personalized videos from celebrities. Cameo filed a trademark lawsuit against the artificial intelligence company following the launch of Sora’s “Cameo” feature, which allowed users to generate characters of themselves or others and insert them into videos.

“We are gratified by the court’s decision, which recognizes the need to protect consumers from the confusion that OpenAI has created by using the Cameo trademark,” Cameo CEO Steven Galanis said in a statement. “While the court’s order is temporary, we hope that OpenAI will agree to stop using our mark permanently to avoid any further harm to the public or Cameo.”

The order is set to expire on Dec. 22, and a hearing for whether the halt should be made permanent is scheduled for Dec. 19.

Cameo CEO on OpenAI lawsuit: Problem is using our name, not Sora AI

OpenAI announces shopping research tool in latest e-commerce push

OpenAI announced a new tool called “shopping research” on Monday, right as consumers will be ramping up spending ahead of the holiday season.

The startup said the tool is designed for ChatGPT users who are looking for detailed, well-researched shopping guides. The guides include top products, key differences between them and the latest information from retailers, according to a company blog post.

Users will be able to tailor their guides based on their budget, what features they care about and who they are shopping for. OpenAI said it will take a couple of minutes to generate answers with shopping research, so users who are looking for simple answers like a price check can still rely on a regular ChatGPT response.

When users submit prompts to ChatGPT that say things like, “Find the quietest cordless stick vacuum for a small apartment,” or “I need a gift for my four-year-old niece who loves art,” they will see the shopping research tool pop up automatically, OpenAI said. The tool can also be accessed from the menu.

OpenAI has been pushing deeper into e-commerce in recent months. The company introduced a feature called Instant Checkout in September that allows users to make purchases directly from eligible merchants through ChatGPT.

Shopping research users will be able to make purchases with Instant Checkout in the future, OpenAI said on Monday.

OpenAI said its shopping research results are organic and based on publicly available retail websites, and that it will not share users’ chats with retailers. It’s possible that shopping research will make mistakes around product availability and pricing, the company said.

Shopping research is rolling out to OpenAI’s Free, Go, Plus and Pro users who are logged in to ChatGPT.

WATCH: OpenAI taps Foxconn to build AI hardware in the U.S.

Tesla fans told by Dutch safety regulator to stop pressuring agency on ‘FSD Supervised’

A Tesla logo outside the company’s Tilburg Factory and Delivery Center.

Karol Serewis | Getty Images

Tesla is trying to get its “FSD Supervised” technology approved for use in the Netherlands. But Dutch regulators are telling Tesla fans to stop pressuring safety authority RDW on the matter, and that their efforts will have “no influence” on the ultimate decision.

The RDW issued a statement on Monday directed at those who have been sending messages to try to get the agency to clear Tesla’s premium partially automated driving system, marketed in the U.S. as the Full Self-Driving (Supervised) option. It’s not yet available for use in the Netherlands or elsewhere in Europe.

“We thank everyone who has already done so and would like to ask everyone not to contact us about this,” the agency said. “It takes up unnecessary time for our customer service. Moreover, this will have no influence on whether or not the planning is met. Road safety is the RDW’s top priority: admission is only possible once the safety of the system has been convincingly demonstrated.”

The regulator said it will make a decision only after Elon Musk’s company shows that the technology meets the country’s stringent vehicle safety standards. The RDW has booked a schedule allowing Tesla to demonstrate its systems, and said it could decide on authorization as early as February.

Last week, Tesla posted on X encouraging its followers to contact RDW to express their wishes to have the systems approved.

The post claimed, “RDW has committed to granting Netherlands National approval in February 2026,” adding a message to “please contact them via link below to express your excitement & thank them for making this happen as soon as possible.” Tesla said other EU countries could then follow suit.

The RDW corrected Tesla on Monday, saying in a statement on its official website that such approval is not guaranteed and had not been promised.

Tesla didn’t immediately respond to a request for comment.

In the U.S., the National Highway Traffic Safety Administration opened an investigation into Tesla’s FSD-equipped vehicles in October following reports of widespread traffic violations tied to use of the systems.

The cars Tesla sells today, even with FSD Supervised engaged, require a human driver ready to brake or steer at any time.

For years, Musk has promised that Tesla customers would soon be able to turn their existing electric vehicles into robotaxis, capable of generating income for owners while they sleep or go on vacation, with a simple software update.

That hasn’t happened yet, and Tesla has since informed owners that future upgrades will require new hardware as well as software releases.

Tesla is testing a Robotaxi-brand ride-hailing service in Texas and elsewhere, but it includes human safety drivers or supervisors on board who either conduct the drives or manually intervene as needed. Musk has said the company aims to remove human drivers in Austin, Texas, by the end of 2025.

WATCH: Tesla bear on company’s EV business
