Broadcom-OpenAI deal expected to be cheaper than current GPU options

Sam Altman didn’t set out to compete with Nvidia.

OpenAI began with a simple bet that better ideas, not better infrastructure, would unlock artificial general intelligence. But that view shifted years ago, as Altman realized that more compute, or processing power, meant more capability — and ultimately, more dominance.

On Monday morning, he unveiled his latest blockbuster deal, one that moves OpenAI squarely into the chipmaking business and further into competition with the hyperscalers.

OpenAI is partnering with Broadcom to co-develop racks of custom AI accelerators, purpose-built for its own models. It’s a big shift for a company that once believed intelligence would come from smarter algorithms, not bigger machines.

“In 2017, the thing that we found was that we were getting the best results out of scale,” the OpenAI CEO said in a company podcast on Monday. “It wasn’t something we set out to prove. It was something we really discovered empirically because of everything else that didn’t work nearly as well.”

That insight — that the key was scale, not cleverness — fundamentally reshaped OpenAI.

Now, the company is expanding that logic even further, teaming up with Broadcom to design and deploy racks of custom silicon optimized for OpenAI’s workloads.

The deal gives OpenAI deeper control over its stack, from training frontier models to owning the infrastructure, distribution, and developer ecosystem that turns those models into lasting platforms.

Altman’s rapid series of deals and product launches is assembling a complete AI ecosystem, much like Apple did for smartphones and Microsoft did for PCs, with infrastructure, hardware, and developers at its core.


Hardware

Through its partnership with Broadcom, OpenAI is co-developing custom AI accelerators, optimized for inference and tailored specifically to its own models.

Unlike Nvidia and AMD chips, which are designed for broader commercial use, the new silicon is built for vertically integrated systems, tightly coupling compute, memory, and networking into full rack-level infrastructure. OpenAI plans to begin deploying them in late 2026.

The Broadcom deal is similar to what Apple did with its M-series chips: control the semiconductors, control the experience.

But OpenAI is going even further and engineering every layer of the hardware stack, not just the chip.

The Broadcom systems are built on Broadcom's Ethernet networking stack and designed to accelerate OpenAI's core workloads, giving the company a physical advantage that's deeply entangled with its software edge.

At the same time, OpenAI is pushing into consumer hardware, a rare move for a model-first company.

Its $6.4 billion all-stock acquisition of Jony Ive's startup, io, brought the legendary Apple designer into its inner circle. It was a sign that OpenAI doesn't just want to power AI experiences; it wants to own them.

Ive and his team are exploring a new class of AI-native devices designed to reshape how people interact with intelligence, moving beyond screens and keyboards toward more intuitive, engaging experiences.

Reports of early concepts include a screenless, wearable device that uses voice input and subtle haptics, envisioned more as an ambient companion than a traditional gadget.

OpenAI's twin bet on custom silicon and emotionally resonant consumer hardware adds two more layers of the stack over which it has direct control.


Blockbuster deals

OpenAI's chips, data centers, and power deals fold into one coordinated campaign, called Stargate, that provides the physical backbone of AI.

In the past three weeks, that campaign has gone into overdrive with a string of major deals, including multi-gigawatt chip partnerships with AMD and Broadcom.

Taken together, it is OpenAI’s push to root the future of AI in infrastructure it can call its own.

“We are able to think from etching the transistors all the way up to the token that comes out when you ask ChatGPT a question, and design the whole system,” Altman said. “We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models — all of that.”

Whether or not OpenAI can deliver on every promise, the scale and speed of Stargate is already reshaping the market, adding hundreds of billions in market cap for its partners, and establishing OpenAI as the de facto market leader in AI infrastructure.

None of its rivals appears able to match the pace or ambition. And that perception alone is proving a powerful advantage.

Developers


Until now, most companies treated OpenAI as a tool in their stack. But with new features for publishing, monetizing, and deploying apps directly inside ChatGPT, OpenAI is pushing for tighter integration — and making it harder for developers to walk away.

Microsoft CEO Satya Nadella pursued a similar strategy after taking over from Steve Ballmer.

To build trust with developers, Nadella leaned into open source and acquired GitHub for $7.5 billion, a move that signaled Microsoft’s return to the developer community.

GitHub later became the launchpad for tools like Copilot, anchoring Microsoft back at the center of the modern developer stack.

“OpenAI and all the big hyperscalers are going for vertical integration,” said Ben van Roo, CEO of Legion, a startup building secure agent frameworks for defense and intelligence use cases.

“Use our models and our compute, and build the next-gen agents and workflows with our tools. The market is massive. We’re talking about replaying SaaS, big systems of record, and literally part of the labor force,” said van Roo.

SaaS stands for software as a service, the subscription model behind enterprise software companies such as Salesforce, Oracle, and Adobe.

Legion’s strategy is to stay model-agnostic and focus on secure, interoperable agentic workflows that span multiple systems. The company is already deploying inside classified Department of Defense environments and embedding across platforms like NetSuite and Salesforce.

But that same shift also introduces risk for the model makers.

“Agents and workflows make some of the massive LLMs both powerful and maybe less necessary,” he noted. “You can build reasoning agents with smaller and specific workflows without GPT-5.”

The tools and agents built with leading LLMs have the potential to replace legacy software products from companies like Microsoft and Salesforce.

That’s why OpenAI is racing to build the infrastructure around its models. It’s not just to make them more powerful, but harder to replace.

The real bet isn’t that the best model will win, but that the company with the most complete developer loop will define the next platform era.

And that’s the vision for ChatGPT now: Not just a chatbot, but an operating system for AI.




Elon Musk’s brother unloads $25 million in Tesla (TSLA) stock as price surges past $450


Tesla board member and Elon Musk's brother, Kimbal Musk, is back to selling Tesla (TSLA) stock. According to a new SEC filing, Kimbal has cashed out over $25 million worth of shares and donated thousands more as the stock rides high in late 2025.

We often report on insider selling at Tesla, and Kimbal is one of the more active sellers on the board. He frequently exercises options and sells shares.

According to a Form 4 filing with the SEC released yesterday, Kimbal sold 56,820 shares of Tesla common stock on December 9.

The shares were sold at a weighted average price of $450.66, with individual transactions ranging from $450.44 to $450.90.


That adds up to a total cash-out of approximately $25.6 million.

But that wasn’t the only movement. The filing also reveals that Kimbal gifted 15,242 shares to a “donor-advised fund”. At the execution price of the sold shares, that donation is worth roughly $6.8 million.

Following these transactions, Kimbal still holds a significant stake in the company. The filing indicates he retains 1,376,373 shares of Tesla directly.
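As a sanity check on the filing's arithmetic, the totals can be reproduced from the share counts and the weighted average price reported above (valuing the gifted shares at the same execution price is an approximation, not something the filing states):

```python
# Figures from the Form 4 filing described above.
shares_sold = 56_820
avg_price = 450.66        # weighted average sale price ($450.44-$450.90 range)
gifted_shares = 15_242

sale_proceeds = shares_sold * avg_price
# Valuing the gift at the sale execution price is an approximation.
gift_value = gifted_shares * avg_price

print(f"Sale proceeds: ${sale_proceeds:,.0f}")  # ~ $25.6 million
print(f"Gift value:    ${gift_value:,.0f}")
```

The gift works out to roughly $6.87 million at that price, consistent with the ~$6.8 million figure cited above.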

Electrek’s Take

For those who are not aware, Kimbal is notorious for calling the top on Tesla's stock.

Tesla’s stock is currently trading at a price-to-earnings ratio of over 300. That’s unsustainable.

In short, owning Tesla's stock right now is a bet that Tesla can grow earnings roughly 6-10x in the next year or two, while the current earnings trend is a rapid decline.

If you think Tesla can do that, then it might make sense to own it. I doubt Kimbal believes that is the case.
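The multiple math behind that bet is straightforward. Assuming (my assumption, for illustration) that a mature growth stock normally trades somewhere around 30-50 times earnings, a P/E of 300 implies the earnings growth cited above:

```python
# Earnings growth implied by a P/E of 300, if the share price holds
# and the multiple compresses to a conventional growth-stock band.
current_pe = 300
for target_pe in (50, 30):  # assumed "normal" band, illustration only
    required_multiple = current_pe / target_pe
    print(f"P/E {current_pe} -> {target_pe}: earnings must grow ~{required_multiple:.0f}x")
```

That compression band is exactly where the 6-10x figure comes from.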

The donation to the donor-advised fund is also standard practice for him. It allows him to take the tax deduction for the charitable contribution immediately while distributing the funds to specific charities over time.

Many billionaires have been known to do that, often transferring the shares to “charities” under their control.

FTC: We use income earning auto affiliate links. More.



The Ford Bronco EV is real, but don’t get too excited


The electric Ford Bronco is rolling off the production line, but not in the US as you might expect. This one is made in China.

Ford Bronco EV production kicks off in China

China gets another cool new electric vehicle that the US will miss out on. The electric Bronco is now rolling off the production line at Ford’s Nanchang, China, manufacturing plant.

On December 12, Ford announced the Bronco EV, or what it calls the “All-Terrain Camping SUV,” has entered mass production. The SUV rolled off the assembly line as the 200,000th vehicle built at the facility.

The plant is part of Ford’s joint venture with Jiangling Motors Group (JMC) and currently produces other Ford, Lincoln, and JMC vehicles.


Earlier this year, the JV invested RMB 300 million ($42.5 million) in upgrades to produce new energy vehicles (NEVs), starting with the electric Bronco.

The electric SUV looks nearly identical to the one sold in the US, but it draws power from a 105.4 kWh battery supplied by BYD’s FinDreams, delivering a CLTC driving range of 650 km (404 miles).

Ford begins mass production of the electric Bronco in China (Source: JMC Ford)

It’s equipped with a dual-motor (AWD) powertrain, packing a combined 445 horsepower (332 kW). The EREV version uses a 43.7 kWh battery and a 1.5T engine, good for 220 km (137 miles) all-electric range. Combined, it delivers a driving range of 1,220 km (758 miles).
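The mile figures quoted above are straight conversions of the CLTC ratings; a quick unit check (at 1 mile = 1.609344 km) reproduces them:

```python
# Convert the CLTC range figures cited above from km to miles.
KM_PER_MILE = 1.609344

ranges_km = [("EV range", 650), ("EREV electric-only", 220), ("EREV combined", 1220)]
for label, km in ranges_km:
    print(f"{label}: {km} km ~ {km / KM_PER_MILE:.0f} mi")
```

The results round to 404, 137, and 758 miles, matching the figures in the text. Note that CLTC ratings are optimistic relative to EPA estimates.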

The interior is custom-tailored for Chinese buyers with modern tech and features. It even includes a built-in 7.5L refrigerator.

A 15.6″ infotainment sits at the center with a smaller driver cluster. Ford also offers an optional 70″ AR head-up display (HUD).

The Bronco EV is 5,025 mm long, 1,960 mm wide, and 1,825 mm tall, with a wheelbase of 2,950 mm, which is about the same size as the standard version sold in the US.

Ford opened orders for the Bronco EV last month with pre-sale prices starting at RMB 229,800 ($32,300). In addition to the fully electric (EV) powertrain, it is also offered as an extended-range electric vehicle (EREV).

The electric Bronco is available in China in three variants, priced from RMB 229,800 ($32,300) to RMB 282,800 ($40,000).

While Ford is planning to build a plug-in hybrid (PHEV) Bronco at its Valencia assembly plant in Spain for Europe, the American automaker still has no plans to launch a fully electric version in the US. We’ll keep wishing.




Red-hot Texas is getting so many data center requests that experts see a bubble


Everything is bigger in Texas. That’s also true for data center demand in the Lone Star State, where project developers are rushing to cash in on the artificial intelligence boom.

Cheap land and cheap energy are combining to attract a flood of data center developers to the state. The potential demand is so vast that it will be impossible to meet by the end of the decade, energy experts say.

Speculative projects are clogging up the pipeline to connect to the electric grid, making it difficult to see how much demand will actually materialize, they say. But investors will be left on the hook if inflated demand forecasts lead to more infrastructure being built than is actually needed.

“It definitely looks, smells, feels — is acting like a bubble,” said Joshua Rhodes, a research scientist at the University of Texas at Austin and a founder of energy consulting firm IdeaSmiths.

“The top line numbers are almost laughable,” Rhodes said.

More than 220 gigawatts of big projects have asked to connect to the Texas electric grid by 2030, according to December data from the Electric Reliability Council of Texas. More than 70% of those projects are data centers, according to ERCOT, which manages the Texas power grid.

That’s more than twice the Lone Star State’s record peak summer demand this year of around 85 gigawatts, and its total available power generation for the season of around 103 gigawatts. Those figures are “crazy big,” said Beth Garza, a former ERCOT watchdog.

“There’s not enough stuff to serve that much load on the equipment side or the consumption side,” said Garza, director of ERCOT’s independent market monitor from 2014 to 2019.

Rhodes agreed. “There’s just no way we can physically put this much steel in the ground to match those numbers. I don’t even know if China could do it that fast,” he said.
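The mismatch the experts describe is easy to quantify from the ERCOT figures above:

```python
# ERCOT figures cited above (approximate, in gigawatts).
requested = 220      # large-load interconnection requests by 2030
record_peak = 85     # record summer peak demand this year
available_gen = 103  # total available generation for the season

print(f"Requests vs. record peak:          {requested / record_peak:.1f}x")
print(f"Requests vs. available generation: {requested / available_gen:.1f}x")
```

Requests run about 2.6x the record peak and 2.1x total available generation, which is why even partial materialization would strain the grid.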

‘Not all real’

Data center requests have exploded in Texas since state legislation in 2023 required projects that have not signed electric connection agreements to be considered in power demand forecasts.

The number of big projects requesting an electric connection has nearly quadrupled this year. But more than half of them, representing about 128 gigawatts of increased potential demand, have not submitted studies for ERCOT to review yet. About another 90 gigawatts are either under review or have had planning studies approved.

“We know it’s not all real. The question is how much is real,” said Michael Hogan, a senior advisor at the Regulatory Assistance Project, which advises governments and regulators on energy policy.

The huge numbers in Texas reflect a broader data center bubble in the U.S., said Hogan, who has worked in the electric industry for more than four decades, starting at General Electric in 1980.

“As with everything else in Texas, it’s an outsized example of it,” he said.

The number of projects that have actually connected to the grid or have been approved by ERCOT is much smaller, at only around 7.5 gigawatts. It is still a large number, equivalent to nearly eight large nuclear plants. But Texas can meet that level of demand, Rhodes said.

“We could comfortably grow 8 gigawatts of data centers,” Rhodes said. Texas might be able to meet 20 gigawatts or 30 gigawatts of data center demand by 2030, he said.

Texas has acted to separate serious data center projects from those that are merely speculative. A law passed in May requires developers to pay $100,000 for the initial study of their project and show that a site is secured through an ownership interest or lease. And they have to disclose whether they have outlined the same project anywhere else in Texas.

The Texas Public Utility Commission has proposed a rule that would require data centers to post $50,000 in security per megawatt of peak power. The cost to a developer would total at least $50 million for a gigawatt-scale data center.
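The proposed requirement scales linearly with peak load, which is where the $50 million figure comes from:

```python
# Proposed Texas PUC security requirement, scaled to a gigawatt-class site.
security_per_mw = 50_000   # dollars per megawatt of peak power
mw_per_gw = 1_000

deposit_for_1gw = security_per_mw * mw_per_gw
print(f"Security for a 1 GW data center: ${deposit_for_1gw:,}")  # $50,000,000
```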

“The serious developers with long-term contracts signed with anchor tenants, they’re going to be willing to put that money down,” Rhodes said. More speculative developers will likely drop out of the line for an electric connection, which will help authorities get a more accurate forecast, he said.

Risk to investors

The risk is that electric infrastructure such as power plants, transmission lines and transformers will be built for speculative data centers that either do not materialize or use less electricity than anticipated, Rhodes said. And overbuilding would come at a time when the cost of that infrastructure has soared as data centers and other industries all compete for the same scarce equipment, he said.

“When the bubble bursts, who pays is going to depend on how much steel has been moved,” Rhodes said. The cost of a natural gas plant, for example, has more than doubled over the past five years, he said.

“It’s kind of like buying your house at the top of the market,” the analyst said. “If the house price goes down in five years, you’re out of luck.”


The cost of building new power plants to serve the Texas electric market is generally borne by investors, Rhodes and Hogan said, providing some protection to households from higher electricity prices if too much capacity is built.

By contrast, electric prices have spiked in some Midwestern and mid-Atlantic states from data center demand because the grid operator, PJM Interconnection, buys power generation years in advance — with the burden falling on consumers.

In Illinois, where the northern part of the state is served by PJM, residential electricity prices rose about 20% in September compared to the same month last year. But prices in Texas increased just 5% year over year, below the average national increase of more than 7%, according to data from the Energy Information Administration.

Texas has less risk of building too much generation compared to PJM states because of the way the market is structured, Hogan said. But “whatever [new] build we do end up seeing in Texas, the people who ended up investing in the excess capacity are the ones that are going to suffer,” he said.

