OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024. 

Jason Redmond | AFP | Getty Images

A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote in the open letter.

OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.

The current and former employees wrote that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels that technology has for different types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

The letter also details the current and former employees’ concerns about insufficient whistleblower protections for the AI industry, stating that without effective government oversight, employees are in a relatively unique position to hold companies accountable.

“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The letter asks AI companies to commit to not entering or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company’s board, regulators and others; to support a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.

Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three famed computer scientists known for advancing the artificial intelligence field also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.

“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline, as well as a Safety and Security Committee led by members of the board and OpenAI leaders.

Microsoft declined to comment.

Mounting controversy for OpenAI

Last month, OpenAI backtracked on a controversial decision that forced former employees to choose between signing a non-disparagement agreement that would never expire and keeping their vested equity in the company. The internal memo, viewed by CNBC, was sent to former employees and shared with current ones.

The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” an OpenAI spokesperson told CNBC at the time.

Tuesday’s open letter also follows OpenAI’s decision last month to disband its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC at the time.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup last month. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI co-founder and chief scientist Ilya Sutskever speaks at Tel Aviv University in Tel Aviv on June 5, 2023.

Jack Guez | AFP | Getty Images

CEO Sam Altman said on X he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.

In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”

The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever remained on staff at the time but left the board. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

American actress Scarlett Johansson at a photocall for the film “Asteroid City” at the Cannes Film Festival in Cannes, France, on May 24, 2023.

Mondadori Portfolio | Getty Images

Meanwhile, last month, OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface and audio capabilities, the company’s latest effort to expand the use of its popular chatbot. One week after OpenAI debuted the range of audio voices, the company announced it would pull one of the viral chatbot’s voices named “Sky.”

“Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she had declined to let the company use it.


OpenAI to acquire Neptune, a startup that helps with AI model training


OpenAI CEO Sam Altman attends an event to pitch AI for businesses in Tokyo, Japan February 3, 2025.

Kim Kyung-hoon | Reuters

OpenAI has entered into a definitive agreement to acquire Neptune, a startup that builds monitoring and debugging tools that artificial intelligence companies use as they train models.

Neptune and OpenAI have collaborated on a metrics dashboard to help teams that are building foundation models. The companies will work “even more closely together” because of the acquisition, Neptune CEO Piotr Niedźwiedź said in a blog post.

The startup will wind down its external services in the coming months, Niedźwiedź said. The terms of the acquisition were not disclosed.

“Neptune has built a fast, precise system that allows researchers to analyze complex training workflows,” OpenAI’s Chief Scientist Jakub Pachocki said in a statement. “We plan to iterate with them to integrate their tools deep into our training stack to expand our visibility into how models learn.”

OpenAI has acquired several companies this year.

It purchased a small interface startup called Software Applications Incorporated for an undisclosed sum in October, product development startup Statsig for $1.1 billion in September and Jony Ive’s AI devices startup io for more than $6 billion in May.

Neptune had raised more than $18 million in funding from investors including Almaz Capital and TDJ Pitango Ventures, according to its website. Neptune’s deal with OpenAI is still subject to customary closing conditions.

“I am truly grateful to our customers, investors, co-founders, and colleagues who have made this journey possible,” Niedźwiedź said. “It was the ride of a lifetime already, yet still I believe this is only the beginning.”

WATCH: Sam Altman hits reset at OpenAI, pausing side bets to defend ChatGPT’s AI lead



Micron stops selling memory to consumers as demand spikes from AI chips


A person walks by a sign for Micron Technology headquarters in San Jose, California, on June 25, 2025.

Justin Sullivan | Getty Images

Micron said on Wednesday that it plans to stop selling memory to consumers to focus on meeting demand for high-powered artificial intelligence chips.

“The AI-driven growth in the data center has led to a surge in demand for memory and storage,” Sumit Sadana, Micron’s chief business officer, said in a statement. “Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments.”

Micron’s announcement is the latest sign that the AI infrastructure boom is creating shortages for inputs like memory as a handful of companies commit to spending hundreds of billions of dollars in the next few years to build massive data centers. Memory, which is used by computers to store data for short periods of time, is facing a global shortage.

Micron shares are up about 175% this year, though they slipped 3% on Wednesday to $232.25.

AI chips, like the GPUs made by Nvidia and Advanced Micro Devices, use large amounts of the most advanced memory. For example, the current-generation Nvidia GB200 chip has 192GB of memory per graphics processor. Google’s latest AI chip, the Ironwood TPU, needs 192GB of high-bandwidth memory.

Memory is also used in phones and computers, but with lower specs and in much lower quantities — many laptops come with only 16GB of memory. Micron’s Crucial brand sold memory on sticks that tinkerers could use to build their own PCs or upgrade their laptops. Crucial also sold solid-state hard drives.

Micron competes against SK Hynix and Samsung in the market for high-bandwidth memory, but it’s the only U.S.-based memory supplier. Analysts have said that SK Hynix is Nvidia’s primary memory supplier.

Micron supplies AMD, which says its AI chips use more memory than others, providing them a performance advantage for running AI. AMD’s current AI chip, the MI350, comes with 288GB of high-bandwidth memory.

Micron’s Crucial business was not broken out in company earnings. However, its cloud memory business unit showed 213% year-over-year growth in the most recent quarter.

Analysts at Goldman on Tuesday raised their price target on Micron’s stock to $205 from $180, though they maintained their hold recommendation. The analysts wrote in a note to clients that due to “continued pricing momentum” in memory, they “expect healthy upside to Street estimates” when Micron reports quarterly results in two weeks.

A Micron spokesperson declined to comment on whether the move would result in layoffs.

“Micron intends to reduce impact on team members due to this business decision through redeployment opportunities into existing open positions within the company,” the company said in its release.

WATCH: Winners and losers from surge in prices for memory chips



Microsoft stock sinks on report AI product sales are missing growth goals



Microsoft pushed back on a report Wednesday that the company lowered growth targets for artificial intelligence software sales after many of its salespeople missed those goals in the last fiscal year.

The company’s stock sank more than 2% following The Information’s report.

A Microsoft spokesperson said the company has not lowered sales quotas or targets for its salespeople.

The sales lag occurred for Microsoft’s Foundry product, an Azure enterprise platform where companies can build and manage AI agents, according to The Information, which cited two salespeople in Azure’s cloud unit.

AI agents can carry out a series of actions for a user or organization autonomously.

Less than a fifth of salespeople in one U.S. Azure unit met the Foundry sales growth target of 50%, according to The Information.

In another unit, the quota was set to double Foundry sales, The Information reported. The quota was dropped to 50% after most salespeople didn’t meet it.

In a statement, the company said the news outlet inaccurately conflated growth targets with sales quotas.


“Aggregate sales quotas for AI products have not been lowered, as we informed them prior to publication,” a Microsoft spokesperson said.

The AI boom has presented opportunities for businesses to add efficiencies and streamline tasks, with the companies that build these agents touting the power of the tools to take on work and allow workers to do more.

OpenAI, Google, Anthropic, Salesforce, Amazon and others all have their own tools to create and manage these AI assistants.

But the adoption of these tools by traditional businesses hasn’t seen the same surge as other parts of the AI ecosystem.

The Information noted AI adoption struggles at private equity firm Carlyle last year, where the tools would not reliably connect data from other sources. The firm later reduced its spending on the tools.

