OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
Jason Redmond | AFP | Getty Images
A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote in the open letter.
OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
The current and former employees wrote that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels of different types of harm their technology poses.
“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
The letter also details the current and former employees’ concerns about insufficient whistleblower protections for the AI industry, stating that without effective government oversight, employees are in a relatively unique position to hold companies accountable.
“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”
The letter asks AI companies to commit to not entering or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company’s board, regulators and others; to support a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.
Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three famed computer scientists known for advancing the artificial intelligence field also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.
“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline, as well as a Safety and Security Committee led by members of the board and OpenAI leaders.
Microsoft declined to comment.
Mounting controversy for OpenAI
Last month, OpenAI backtracked on a controversial decision that forced former employees to choose between signing a non-disparagement agreement that would never expire and keeping their vested equity in the company. The internal memo, viewed by CNBC, was sent to former employees and shared with current ones.
The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” an OpenAI spokesperson told CNBC at the time.
Tuesday’s open letter also follows OpenAI’s decision last month to disband its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC at the time.
The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.
The team’s disbandment came after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup last month. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and Chief Scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.
Jack Guez | AFP | Getty Images
CEO Sam Altman said on X he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Leike wrote he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI must become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”
The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.
In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”
The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.
Altman’s ouster prompted resignations and threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever remained on staff at the time, though no longer as a board member. Adam D’Angelo, who had also voted to oust Altman, stayed on the board.
American actress Scarlett Johansson at a photocall for the film “Asteroid City” at the Cannes Film Festival in Cannes, France, on May 24, 2023.
Meanwhile, last month, OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface and audio capabilities, the company’s latest effort to expand the use of its popular chatbot. One week after OpenAI debuted the range of audio voices, the company announced it would pull one of the viral chatbot’s voices named “Sky.”
“Sky” created controversy because it resembled the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she had declined to let the company use it.
U.S. artificial intelligence names were in negative territory in premarket trading on Friday, extending losses into a third day.
Oracle was 0.9% lower in premarket trading, paring earlier losses that had seen it fall 1.3%. Nvidia shed 0.7%, Micron fell 0.9% and CoreWeave was down 1.3% at 5:16 a.m. ET.
Shares of cloud computing and database software maker Oracle plummeted on Thursday, ending the session around 11% lower after the company’s quarterly revenue missed analyst expectations on Wednesday.
Oracle’s slide dragged other AI-related names down with it despite a record-breaking rally elsewhere on Wall Street, suggesting investors are rotating out of tech and into other parts of the market.
Despite booming demand for Oracle’s artificial intelligence infrastructure, it posted mixed results this week. Revenue came in at $16.06 billion, compared with $16.21 billion expected by analysts, according to data compiled by LSEG.
The drop followed widespread speculation about the long-term health of the company, with investors cautious about its reliance on debt to fund its AI infrastructure build-out. The broader industry’s circular dealmaking has also raised eyebrows.
“We think recent investor scrutiny on artificial intelligence’s potential and circular GPU deals can be overly punitive to key AI suppliers like Oracle,” said Morningstar equity analyst Luke Yang. “Oracle remains a respectable cloud provider that enjoys strong switching costs across its database, application, and infrastructure lineup.”
That said, the firm reduced its fair value estimate for wide-moat Oracle to $286 per share, down from $340. Morningstar’s moat rating refers to its assessment of a company’s durable competitive advantage.
“We lowered our long-term earnings outlook as delivering Oracle’s planned capacity on time now proved to be a harder task. However, we continue to view shares as undervalued,” Yang added.
Traders work on the floor of the New York Stock Exchange on Dec. 11, 2025, in New York City.
Spencer Platt | Getty Images
The S&P 500 and Dow Jones Industrial Average advanced on Thursday, with both hitting fresh closing records. The Russell 2000 index also ended the session at a new high, following the U.S. Federal Reserve’s quarter-point cut on Wednesday.
But Thursday’s individual stock moves show that not all is well with the AI trade. Oracle shares plunged nearly 11%, a day after the company reported weak quarterly revenue alongside higher capital expenditures and long-term lease commitments. Oracle’s slide dragged down AI-related names such as Nvidia and Micron.
In extended trading, Broadcom shares fell 4.5%. The chipmaker beat Wall Street’s expectations for earnings and revenue, but CEO Hock Tan appeared to have failed to address worries that its largest customer, Google, might eventually bring more of its chipmaking in-house. Investors also worried that rising memory prices would pressure margins and that the company’s chip deal with OpenAI might not be binding.
That helps explain why the tech-heavy Nasdaq Composite fell 0.26% even as other major U.S. indexes hit records. Taken together, the moves suggest investors are rotating out of tech and into other parts of the market. The S&P 500 financials sector, for instance, closed at a fresh record, buoyed by jumps in Visa and Mastercard.
Even though the AI theme is under scrutiny, other sectors are performing well on the back of a resilient U.S. economy, as Fed officials signaled on Wednesday, and buoyed by the interest-rate cut. So long as nothing throws a spanner in the works, it looks like we’re all set for a happy holiday season.
— CNBC’s Kristina Partsinevelos contributed to this report.
Disney to invest $1 billion in OpenAI. The media giant will also allow Sora, OpenAI’s video generator, to use its copyrighted characters under a $1 billion licensing agreement. “We think this is a good investment for the company,” Disney CEO Bob Iger told CNBC.
Reddit launches legal challenge in Australia. The country introduced a ban on social media for teens under 16, which came into effect on Wednesday. Reddit argues that the law is “invalid on the basis of the implied freedom of political communication.”
[PRO] Where will Oracle go from here? Analysts are revisiting their price targets for Oracle stock after the firm released a disappointing and confusing earnings report on Wednesday.
And finally…
Gen. David Petraeus, former CIA director, former commander of U.S. Central Command and former American commander in Iraq.
The White House’s new national security strategy gave Europe a scare last week, warning that the region faced “civilizational erasure” and questioning whether it could remain a geopolitical partner for America.
The strategy was, “in a way, going after the Europeans but, frankly, some of the Europeans needed to be gotten after because I watched as four different presidents tried to exhort the Europeans to do more for their own defense and now that’s actually happening,” David Petraeus, former CIA director and four-star U.S. Army general, told CNBC’s Dan Murphy in Abu Dhabi on Thursday.
Reddit, the popular community-focused forum, has launched a legal challenge against Australia’s social media ban for teens under 16, arguing that the newly enacted law is ineffective and goes too far by restricting political discussion online.
In its application to Australia’s High Court, the social news aggregation platform said the law is “invalid on the basis of the implied freedom of political communication,” arguing that the law burdens political communication.
Canberra’s ban came into effect on Wednesday and targets 10 major services, including Alphabet’s YouTube, Meta’s Instagram, ByteDance’s TikTok, Reddit, Snapchat and Elon Musk’s X. All targeted platforms had agreed to comply with the policy to varying degrees.
Australia’s Prime Minister’s office, Attorney-General’s Department and other social media platforms did not immediately reply to requests for comment.
Under the law, the targeted platforms must take “reasonable steps” to prevent underage access, using age-verification methods such as inference from online activity, facial age estimation via selfies, uploaded IDs or linked bank details.
Reddit’s application to the courts seeks to either declare the law invalid or exclude the platform from the provisions of the law.
In a statement to CNBC, Reddit said that while it agrees with the importance of protecting persons under 16, the law could isolate teens “from the ability to engage in age-appropriate community experiences (including political discussions).”
It also said in its application that the law “burdens political communication,” saying “the political views of children inform the electoral choices of many current electors, including their parents and their teachers, as well as others interested in the views of those soon to reach the age of maturity.”
The platform also argued that it should not be subject to the law, saying it operates less like a traditional social network and more like a forum for adults that facilitates “knowledge sharing” between users, and noting that it does not import contact lists or address books.
“Reddit is significantly different from other sites that allow for users to become ‘friends’ with one another, or to post photos about themselves, or to organise events,” the platform said in its application.
Reddit further said in its court filing that most content on its platform is accessible without an account, and pointed out that a person under the age of 16 “can be more easily protected from online harm if they have an account, being the very thing that is prohibited.”
“That is because the account can be subject to settings that limit their access to particular kinds of content that may be harmful to them,” it added.
Despite its objections, Reddit said that the challenge was not an attempt to avoid complying with the law, nor was it an effort to retain young users for business reasons.
“There are more targeted, privacy-preserving measures to protect young people online without resorting to blanket bans,” the platform said.