OpenAI is disbanding its “AGI Readiness” team, which advised the company on its own capacity to handle increasingly powerful AI and on the world’s readiness to manage that technology, according to the head of the team.

On Wednesday, Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company via a Substack post. He wrote that his primary reasons were that the opportunity cost had become too high, that he thought his research would be more impactful externally, that he wanted to be less biased, and that he had accomplished what he set out to do at OpenAI.

Brundage also wrote that, as far as how OpenAI and the world are doing on AGI readiness, “Neither OpenAI nor any other frontier lab is ready, and the world is also not ready.” Brundage plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. He added that “AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so.”

Former AGI Readiness team members will be reassigned to other teams, according to the post.

“We fully support Miles’ decision to pursue his policy research outside industry and are deeply grateful for his contributions,” an OpenAI spokesperson told CNBC. “His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”

In May, OpenAI decided to disband its Superalignment team, which focused on the long-term risks of AI, just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time.

News of the AGI Readiness team’s disbandment follows the OpenAI board’s potential plans to restructure the firm to a for-profit business, and after three executives — CTO Mira Murati, research chief Bob McGrew and research VP Barret Zoph — announced their departure on the same day last month.

Earlier in October, OpenAI closed its buzzy funding round at a valuation of $157 billion, including the $6.6 billion the company raised from an extensive roster of investment firms and big tech companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects about $5 billion in losses on $3.7 billion in revenue this year, CNBC confirmed with a source familiar with the matter last month.

And in September, OpenAI announced that its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, would become an independent board oversight committee. It recently wrapped up its 90-day review evaluating OpenAI’s processes and safeguards and then made recommendations to the board, with the findings also released in a public blog post.

News of the executive departures and board changes also follows a summer of mounting safety concerns and controversies surrounding OpenAI, which along with Google, Microsoft, Meta and other companies is at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.

In July, OpenAI reassigned Aleksander Madry, one of OpenAI’s top safety executives, to a job focused on AI reasoning instead, sources familiar with the situation confirmed to CNBC at the time.

Madry was OpenAI’s head of preparedness, a team that was “tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models,” according to a bio for Madry on a Princeton University AI initiative website. Madry will still work on core AI safety in his new role, OpenAI told CNBC at the time.

The decision to reassign Madry came around the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”

The letter, which was viewed by CNBC, also stated, “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”

Microsoft gave up its observer seat on OpenAI’s board in July, stating in a letter viewed by CNBC that it can now step aside because it’s satisfied with the construction of the startup’s board, which had been revamped since the uprising that led to the brief ouster of Altman and threatened Microsoft’s massive investment in the company.

But in June, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote at the time.

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.

FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”

The current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels that technology has for different types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI’s Superalignment team, announced last year and disbanded in May, had focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Altman said at the time on X he was sad to see Leike leave and that OpenAI had more work to do. Soon afterward, co-founder Greg Brockman posted a statement attributed to Brockman and the CEO on X, asserting the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X at the time. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote at the time. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”


Inside a Utah desert facility preparing humans for life on Mars


Hidden among the majestic canyons of the Utah desert, about 7 miles from the nearest town, is a small research facility meant to prepare humans for life on Mars.

The Mars Society, a nonprofit organization that runs the Mars Desert Research Station, or MDRS, invited CNBC to shadow one of its analog crews on a recent mission.

“MDRS is the best analog astronaut environment,” said Urban Koi, who served as health and safety officer for Crew 315. “The terrain is extremely similar to the Mars terrain and the protocols, research, science and engineering that occurs here is very similar to what we would do if we were to travel to Mars.”

SpaceX CEO and Mars advocate Elon Musk has said his company can get humans to Mars as early as 2029.

The 5-person Crew 315 spent two weeks living at the research station following the same procedures that they would on Mars.

David Laude, who served as the crew’s commander, described a typical day.

“So we all gather around by 7 a.m. around a common table in the upper deck and we have breakfast,” he said. “Around 8:00 we have our first meeting of the day where we plan out the day. And then in the morning, we usually have an EVA of two or three people and usually another one in the afternoon.”

An EVA refers to extravehicular activity. In NASA speak, EVAs refer to spacewalks, when astronauts leave the pressurized space station and must wear spacesuits to survive in space.

“I think the most challenging thing about these analog missions is just getting into a rhythm. … Although here the risk is lower, on Mars performing those daily tasks are what keeps us alive,” said Michael Andrews, the engineer for Crew 315.



Apple scores big victory with ‘F1,’ but AI is still a major problem in Cupertino


Formula One F1 – United States Grand Prix – Circuit of the Americas, Austin, Texas, U.S. – October 23, 2022. Tim Cook waves the chequered flag to the race winner, Red Bull’s Max Verstappen.

Mike Segar | Reuters

Apple had two major launches last month. They couldn’t have been more different.

First, Apple revealed some of the artificial intelligence advancements it had been working on in the past year when it released developer versions of its operating systems to muted applause at its annual developer’s conference, WWDC. Then, at the end of the month, Apple hit the red carpet as its first true blockbuster movie, “F1,” debuted to over $155 million — and glowing reviews — in its first weekend.

While “F1” was a victory lap for Apple, highlighting the strength of its long-term outlook, the growth of its services business and its ability to tap into culture, Wall Street’s reaction to the company’s AI announcements at WWDC suggests there’s some trouble under the hood.

“F1” showed Apple at its best — in particular, its ability to invest in new, long-term projects. When Apple TV+ launched in 2019, it had only a handful of original shows and one movie, a film festival darling called “Hala” that didn’t even share its box office revenue.

Despite Apple TV+ being written off as a costly side-project, Apple stuck with its plan over the years, expanding its staff and operation in Culver City, California. That allowed the company to build up Hollywood connections, especially for TV shows, and build an entertainment track record. Now, an Apple Original can lead the box office on a summer weekend, the prime season for blockbuster films.

The success of “F1” also highlights Apple’s significant marketing machine and ability to get big-name talent to appear with its leadership. Apple pulled out all the stops to market the movie, including using its Wallet app to send a push notification with a discount for tickets to the film. To promote “F1,” Cook appeared with movie star Brad Pitt at an Apple store in New York and posted a video with actual F1 racer Lewis Hamilton, who was one of the film’s producers.

(L-R) Brad Pitt, Lewis Hamilton, Tim Cook, and Damson Idris attend the World Premiere of “F1: The Movie” in Times Square on June 16, 2025 in New York City.

Jamie Mccarthy | Getty Images Entertainment | Getty Images

Although Apple services chief Eddy Cue said in a recent interview that Apple needs its film business to be profitable to “continue to do great things,” “F1” isn’t just about the bottom line for the company.

Apple’s Hollywood productions are perhaps the most prominent face of the company’s services business, a profit engine that has been an investor favorite since the iPhone maker started highlighting the division in 2016.

Films will only ever be a small fraction of the services unit, which also includes payments, iCloud subscriptions, magazine bundles, Apple Music, game bundles, warranties, fees related to digital payments and ad sales. Plus, even the biggest box office smashes would be small on Apple’s scale — the company does over $1 billion in sales on average every day.

But movies are the only services component that can get celebrities like Pitt or George Clooney to appear next to an Apple logo — and the success of “F1” means that Apple could do more big popcorn films in the future.

“Nothing breeds success or inspires future investment like a current success,” said Comscore senior media analyst Paul Dergarabedian.

But if “F1” is a sign that Apple’s services business is in full throttle, the company’s AI struggles are a “check engine” light that won’t turn off.

Replacing Siri’s engine

At WWDC last month, Wall Street was eager to hear about the company’s plans for Apple Intelligence, its suite of AI features that it first revealed in 2024. Apple Intelligence, a key selling point of the company’s hardware products, had a rollout marred by delays and underwhelming features.

Apple spent most of WWDC going over smaller machine learning features, but did not reveal what investors and consumers increasingly want: A sophisticated Siri that can converse fluidly and get stuff done, like making a restaurant reservation. In the age of OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini, the expectation of AI assistants among consumers is growing beyond “Siri, how’s the weather?”

The company had previewed a significantly improved Siri in the summer of 2024, but earlier this year, those features were delayed to sometime in 2026. At WWDC, Apple didn’t offer any updates about the improved Siri beyond saying that the company was “continuing its work to deliver” the features in the “coming year.” Some observers reduced their expectations for Apple’s AI after the conference.

“Current expectations for Apple Intelligence to kickstart a super upgrade cycle are too high, in our view,” wrote Jefferies analysts this week.

Siri should be an example of how Apple’s ability to improve products and projects over the long term makes the company tough to compete with.

It beat nearly every other voice assistant to market when it first debuted on iPhones in 2011. Fourteen years later, Siri remains essentially the same one-off, rigid, question-and-answer system that struggles with open-ended questions and dates, even after the invention in recent years of sophisticated voice bots based on generative AI technology that can hold a conversation.

Apple’s strongest rivals, including Android parent Google, have done far more to integrate sophisticated AI assistants into their devices than Apple has. And Google doesn’t share privacy-obsessed Apple’s reluctance to collect data and process it in the cloud.

Some analysts have said they believe Apple has a few years before the company’s lack of competitive AI features will start to show up in device sales, given its large installed base and high customer loyalty. But Apple can’t afford to get lapped before it re-enters the race, and its former design guru Jony Ive is now working on new hardware with OpenAI, ramping up the pressure in Cupertino.

“The three-year problem, which is within an investment time frame, is that Android is racing ahead,” Needham senior internet analyst Laura Martin said on CNBC this week.

Apple’s services success with projects like “F1” is an example of what the company can do when it sets clear goals in public and then executes on them over extended time frames.

Its AI strategy could use a similar long-term plan, as customers and investors wonder when Apple will fully embrace the technology that has captivated Silicon Valley.

Wall Street’s anxiety over Apple’s AI struggles was evident this week after Bloomberg reported that Apple was considering replacing Siri’s engine with Anthropic or OpenAI’s technology, as opposed to its own foundation models.

The move, if it were to happen, would contradict one of Apple’s most important strategies in the Cook era: Apple wants to own its core technologies, like the touchscreen, processor, modem and maps software, not buy them from suppliers.

Using external technology would be an admission that Apple Foundation Models aren’t good enough yet for what the company wants to do with Siri.

“They’ve fallen farther and farther behind, and they need to supercharge their generative AI efforts,” Martin said. “They can’t do that internally.”

Apple might even pay billions for the use of Anthropic’s AI software, according to the Bloomberg report. If Apple were to pay for AI, it would be a reversal from current services deals, like the search deal with Alphabet where the Cupertino company gets paid $20 billion per year to push iPhone traffic to Google Search.

The company didn’t confirm the report and declined to comment, but Wall Street welcomed the news and Apple shares rose.

In the world of AI in Silicon Valley, signing bonuses for the kinds of engineers that can develop new models can range up to $100 million, according to OpenAI CEO Sam Altman.

“I can’t see Apple doing that,” Martin said.

Earlier this week, Meta CEO Mark Zuckerberg sent a memo bragging about hiring 11 AI experts from companies such as OpenAI, Anthropic, and Google’s DeepMind. That came after Zuckerberg hired Scale AI CEO Alexandr Wang to lead a new AI division as part of a $14.3 billion deal.

Meta’s not the only company to spend hundreds of millions on AI celebrities to get them in the building. Google spent big to hire away the founders of Character.AI, Microsoft got its AI leader by striking a deal with Inflection and Amazon hired the executive team of Adept to bulk up its AI roster.

Apple, on the other hand, hasn’t announced any big AI hires in recent years. While Cook rubs shoulders with Pitt, the actual race may be passing Apple by.



Musk backs Sen. Paul’s criticism of Trump’s megabill in first comment since it passed


Tesla CEO Elon Musk speaks alongside U.S. President Donald Trump to reporters in the Oval Office of the White House on May 30, 2025 in Washington, DC.

Kevin Dietsch | Getty Images

Tesla CEO Elon Musk, who blasted President Donald Trump’s signature spending bill for weeks, on Friday made his first comments since the legislation passed.

Musk backed a post on X by Sen. Rand Paul, R-Ky., who said the bill’s budget “explodes the deficit” and continues a pattern of “short-term politicking over long-term sustainability.”

The House of Representatives narrowly passed the One Big Beautiful Bill Act on Thursday, sending it to Trump to sign into law.

Paul and Musk have been vocal opponents of Trump’s tax and spending bill, and repeatedly called out the potential for the spending package to increase the national debt.

On Monday, Musk called it the “DEBT SLAVERY bill.”

The independent Congressional Budget Office has said the bill could add $3.4 trillion to the $36.2 trillion of U.S. debt over the next decade. The White House has labeled the agency as “partisan” and has repeatedly disputed the CBO’s estimates.


The bill includes trillions of dollars in tax cuts, increased spending for immigration enforcement and large cuts to funding for Medicaid and other programs.

It also cuts tax credits and support for solar and wind energy and electric vehicles, a particularly sore spot for Musk, who has several companies that benefit from the programs.

“I took away his EV Mandate that forced everyone to buy Electric Cars that nobody else wanted (that he knew for months I was going to do!), and he just went CRAZY!” Trump wrote in a social media post in early June as the pair traded insults and threats.

Shares of Tesla plummeted as the feud intensified, with the company losing $152 billion in market cap on June 5 and putting the company below $1 trillion in value. The stock has largely rebounded since, but is still below where it was trading before the ruckus with Trump.


Tesla one-month stock chart.

— CNBC’s Kevin Breuninger and Erin Doherty contributed to this article.
