
OpenAI is disbanding its “AGI Readiness” team, which advised the company on its own capacity to handle increasingly powerful AI and on the world’s readiness to manage that technology, according to the head of the team.

On Wednesday, Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company via a Substack post. He wrote that his primary reasons were that the opportunity cost of staying had become too high, that he believed his research would be more impactful externally, that he wanted to be less biased, and that he had accomplished what he set out to do at OpenAI.

Brundage also wrote that, as far as how OpenAI and the world are doing on AGI readiness, “Neither OpenAI nor any other frontier lab is ready, and the world is also not ready.” Brundage plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. He added that “AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so.”

Former AGI Readiness team members will be reassigned to other teams, according to the post.

“We fully support Miles’ decision to pursue his policy research outside industry and are deeply grateful for his contributions,” an OpenAI spokesperson told CNBC. “His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”

In May, OpenAI decided to disband its Superalignment team, which focused on the long-term risks of AI, just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time.

News of the AGI Readiness team’s disbandment follows the OpenAI board’s potential plans to restructure the firm into a for-profit business, and comes after three executives — CTO Mira Murati, research chief Bob McGrew and research VP Barret Zoph — announced their departures on the same day last month.

Earlier in October, OpenAI closed its buzzy funding round at a valuation of $157 billion, including the $6.6 billion the company raised from an extensive roster of investment firms and big tech companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects about $5 billion in losses on $3.7 billion in revenue this year, a source familiar with the matter confirmed to CNBC last month.

And in September, OpenAI announced that its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, would become an independent board oversight committee. It recently wrapped up its 90-day review evaluating OpenAI’s processes and safeguards and then made recommendations to the board, with the findings also released in a public blog post.

News of the executive departures and board changes also follows a summer of mounting safety concerns and controversies surrounding OpenAI, which, along with Google, Microsoft, Meta and other companies, is at the helm of a generative AI arms race — a market predicted to top $1 trillion in revenue within a decade. Companies in seemingly every industry are rushing to add AI-powered chatbots and agents to avoid being left behind by competitors.

In July, OpenAI reassigned Aleksander Madry, one of OpenAI’s top safety executives, to a job focused on AI reasoning instead, sources familiar with the situation confirmed to CNBC at the time.

Madry was OpenAI’s head of preparedness, a team that was “tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models,” according to a bio for Madry on a Princeton University AI initiative website. Madry will still work on core AI safety in his new role, OpenAI told CNBC at the time.

The decision to reassign Madry came around the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”

The letter, which was viewed by CNBC, also stated, “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”

Microsoft gave up its observer seat on OpenAI’s board in July, stating in a letter viewed by CNBC that it can now step aside because it’s satisfied with the construction of the startup’s board, which had been revamped since the uprising that led to the brief ouster of Altman and threatened Microsoft’s massive investment in the company.

But in June, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote at the time.

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.

FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”

The current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels that technology has for different types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI’s Superalignment team, announced last year and disbanded in May, had focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Altman said at the time on X he was sad to see Leike leave and that OpenAI had more work to do. Soon afterward, co-founder Greg Brockman posted a statement attributed to Brockman and the CEO on X, asserting the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X at the time. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote at the time. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”


Palantir is soaring while its tech peers are sinking. Here’s why


Alex Karp, chief executive officer of Palantir Technologies Inc., speaks during the AIPCon conference in Palo Alto, California, US, on March 13, 2025.

David Paul Morris | Bloomberg | Getty Images

Tech stocks have struggled in 2025, as recession and trade war fears sap investor appetite for riskier assets.

Palantir is the exception.

Against a volatile market backdrop, the software maker’s stock has gained 45% and is the best performer among companies valued at $5 billion or more, according to FactSet. The closest tech names are VeriSign, up 33%, Okta, up 30%, Robinhood, up 29%, and Uber, up 29%.

President Donald Trump‘s frenzy of government department overhauls is partially to thank for the pop.

“When you think about macroeconomic concerns, you as a company need to be more efficient, and this is where Palantir thrives,” said Bank of America analyst Mariana Pérez Mora.

Palantir has set itself apart in the software world for its artificial-intelligence-enabled tools, gaining recognition for its defense and software contracts with key U.S. government agencies, including the military. In the fourth quarter, its government revenues jumped 45% year-over-year to $343 million.


Companies have faced immense volatility in 2025 as tariffs threaten to jeopardize global supply chains and halt day-to-day manufacturing operations by hiking costs. Those fears have brought the broad market index down about 7% this year, while the tech-heavy Nasdaq Composite has slumped 11%.

Tech’s megacap companies — Apple, Microsoft, Nvidia, Amazon, Alphabet, Meta and Tesla — are all down between 7% and 31% so far this year.

At the same time, the Trump administration has clamped down on government spending, giving Tesla CEO Elon Musk‘s Department of Government Efficiency freedom to slash public sector costs. Some administration officials have touted shifting dollars from consulting contracts to commercial software providers like Palantir, said William Blair analyst Louie DiPalma.

“Palantir’s business model is highly aligned with the priorities of the Trump administration in terms of increasing agility and being very quick to market,” he said.

That’s put Palantir in the same league as major contractors such as Lockheed Martin and Northrop Grumman, which have outperformed in this year’s downdraft. Many companies in the space are also looking to partner with the firm and tend to flock to defense during recessionary times, DiPalma said.

Chart: Palantir vs. the Nasdaq Composite

CEO Alex Karp has also been a vocal supporter of American innovation and the company’s central role in helping prop up what he called the “single best tech scene in the world” during an interview with CNBC earlier this year. Karp also told CNBC that the U.S. needs an “all-country effort” to compete against emerging adversaries.

But the ride for Palantir has been far from smooth, and shares have been susceptible to volatile swings. The stock sold off nearly 14% during the week Trump first announced tariffs, and rocketed 22% in a single day in February on strong earnings.

Its inclusion in more passive and quant funds over the years and the growing attention of retail traders have added to that turbulence, DiPalma said. Last year, the company joined both the S&P 500 and the Nasdaq-100. Palantir trades at one of the highest price-to-earnings multiples in software and last traded at 185 times earnings over the next twelve months. That sets a steep bar for the stock.

“There really is no margin for error,” he said.

WATCH: Palantir CEO on Elon Musk & DOGE: Biggest problem in society is the ‘legitimacy of our institutions’



NXP Semi shares sink on tariff concerns, CEO Kurt Sievers to step down


Kurt Sievers, chief executive officer of NXP Semiconductors NV, during the Federation of German Industries (BDI) conference in Berlin, Germany, on Monday, June 19, 2023.

Liesa Johannssen-Koppitz | Bloomberg | Getty Images

Shares of NXP Semiconductors fell about 8% on Monday after the chip company announced that CEO Kurt Sievers will step down, alongside its latest earnings report.

Here’s how the company did, versus LSEG consensus estimates:

  • Earnings per share: $2.64 adjusted vs. $2.58 expected
  • Revenue: $2.84 billion vs. $2.83 billion expected

Sievers will retire at the end of the year, with Rafael Sotomayor stepping in as president on April 28, 2025.

The company beat expectations on the top and bottom lines but cited a “challenging set of market conditions” looking forward.

“We are operating in a very uncertain environment influenced by tariffs with volatile direct and indirect effects,” Sievers said in an earnings release.

Sales in NXP’s first quarter declined 9% year over year.

The company posted $1.67 billion in auto sales during the first quarter, trailing analyst estimates of $1.69 billion.


NXP Semi said that second-quarter sales would come in at a midpoint of $2.9 billion, ahead of the $2.87 billion that analysts were projecting. Second-quarter adjusted EPS will be $2.66, in line with analyst estimates.

The company logged first-quarter net income of $490 million, a 23% year-over-year drop from $639 million.

NXP’s net income per share was $1.92, down 22% from $2.47 during the same period a year earlier.


WATCH: Uncertainty from Big Tech earnings is fine right now, says Big Technology’s Alex Kantrowitz


Microsoft says U.S. can’t afford falling behind China in quantum computers


Microsoft President Brad Smith speaks during a signing ceremony for a cooperation agreement between the Polish Ministry of Defence and Microsoft, in Warsaw, Poland, February 17, 2025.

Kacper Pempel | Reuters

The U.S. cannot afford to fall behind China in the race to a working quantum computer, Microsoft President Brad Smith wrote Monday.

President Donald Trump and the U.S. government need to prioritize funding for quantum research, or China could surpass the U.S., endangering economic competitiveness and security, Smith wrote.

“While most believe that the United States still holds the lead position, we cannot afford to rule out the possibility of a strategic surprise or that China may already be at parity with the United States,” Smith wrote. “Simply put, the United States cannot afford to fall behind, or worse, lose the race entirely.”

Microsoft’s position is the latest sign that research into quantum computing is starting to heat up among big tech companies and investors who are looking for the next technology that could rival the artificial intelligence boom.

Smith is calling for the Trump administration to increase funding for quantum research, renew the National Quantum Initiative Act and expand a program for testing quantum computers run by the Defense Advanced Research Projects Agency, or DARPA. The Microsoft executive is also calling on the White House to expand the educational pipeline of people with the math and science skills to work on quantum machines, fast-track immigration for Ph.D.s with quantum skills, and buy more quantum-related computer parts to build a U.S. supply chain.

Microsoft did not detail how China surpassing the U.S. in quantum computing technology would endanger national security, but a National Security Agency official last year discussed what could happen if China or another adversary surprised the U.S. by building a quantum computer first.

The official, NSA Director of Research Gil Herrera, said that if such a “black swan” event happened, banks might not be able to keep transactions private because a quantum computer could crack their encryption, according to the Washington Times. A working quantum computer could also crack existing encrypted data that is usually shared publicly in a scrambled fashion, which could reveal secrets on U.S. nuclear weapon systems.

In February, Microsoft announced its latest quantum chip called Majorana, claiming that it invented a new kind of matter to develop the prototype device. Last year, Google announced Willow, a new device the company claimed was a “milestone” because it was able to correct errors and solve a math problem in five minutes that would have taken longer than the age of the universe on a traditional computer.

While conventional computers use bits that are either 0 or 1 to do calculations, quantum computers use “qubits,” which yield 0 or 1 with some probability when measured. Experts say that quantum computers will eventually be useful for problems with nearly infinite possibilities, such as simulating chemistry or routing deliveries.
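The bit-versus-qubit distinction above can be sketched in a few lines of Python. This is an illustrative classical simulation, not real quantum hardware: a single qubit is represented by two amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared amplitudes.

```python
import math
import random

def measure(alpha, beta):
    """Simulate measuring a qubit with amplitudes (alpha, beta).

    A classical bit is always exactly 0 or 1. A qubit carries two
    amplitudes whose squares give the probabilities of reading 0 or 1.
    """
    p0 = abs(alpha) ** 2  # probability of observing 0
    return 0 if random.random() < p0 else 1

# A qubit in equal superposition: |alpha|^2 = |beta|^2 = 0.5
alpha = beta = 1 / math.sqrt(2)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
# Each outcome appears roughly half the time.
```

Any single measurement gives a definite 0 or 1; the probabilistic behavior only shows up across many repeated measurements, which is one reason useful quantum algorithms are hard to design.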

But the current quantum computers are far away from that point, and many computer industry participants say it could take decades for quantum computers to reach their potential.

Microsoft’s chip, Majorana, has eight qubits, but the company says it has a goal of at least 1 million qubits for a commercially useful chip. Microsoft needs to build a device with a few hundred qubits before the company starts looking at whether it’s reliable enough for customers.

WATCH: How quantum computing could supercharge Google’s AI ambitions

