OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
Jason Redmond | AFP | Getty Images
A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote in the open letter.
OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
The current and former employees wrote that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels the technology poses for different types of harm.
“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
The letter also details the current and former employees’ concerns about insufficient whistleblower protections for the AI industry, stating that without effective government oversight, employees are in a relatively unique position to hold companies accountable.
“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”
The letter asks AI companies to commit to not entering or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company’s board, regulators and others; to support a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.
Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three famed computer scientists known for advancing the artificial intelligence field also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.
“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline, as well as a Safety and Security Committee led by members of the board and OpenAI leaders.
Microsoft declined to comment.
Mounting controversy for OpenAI
Last month, OpenAI backtracked on a controversial decision to make former employees choose between signing a non-disparagement agreement that would never expire, or keeping their vested equity in the company. The internal memo, viewed by CNBC, was sent to former employees and shared with current ones.
The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” an OpenAI spokesperson told CNBC at the time.
Tuesday’s open letter also follows OpenAI’s decision last month to disband its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC at the time.
The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.
The team’s disbandment followed announcements by its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, that they were departing the startup last month. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and Chief Scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.
Jack Guez | AFP | Getty Images
CEO Sam Altman said on X he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Leike wrote he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI must become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”
The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.
In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”
The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.
Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.
American actress Scarlett Johansson at a photocall for the film “Asteroid City” at the Cannes Film Festival in Cannes, France, on May 24, 2023.
Meanwhile, last month, OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface and audio capabilities, the company’s latest effort to expand the use of its popular chatbot. One week after OpenAI debuted the range of audio voices, the company announced it would pull one of the viral chatbot’s voices named “Sky.”
“Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she had declined to let the company use it.
Nvidia is developing software that could provide location verification for its AI graphics processing units (GPUs), a move that comes as Washington ramps up efforts to prevent restricted chips from being used in countries like China.
The opt-in service uses a client software agent that Nvidia chip customers can install to monitor the health of their AI GPUs, the company said in a blog post on Wednesday.
Nvidia also said that customers “will be able to visualize their GPU fleet utilization in a dashboard, globally or by compute zones — groups of nodes enrolled in the same physical or cloud locations.”
However, Nvidia told CNBC in a statement that the latest software does not give the company or outside actors the ability to disable its chips.
“There is no kill switch,” it added. “For GPU health, there are no features that allow NVIDIA to remotely control or take action on registered systems. It is read-only telemetry sent to NVIDIA.”
Telemetry is the automated process of collecting and transmitting data from remote or inaccessible sources to a central location for monitoring, analysis and optimization.
The ability to locate a device depends on the type of sensor data collected and transmitted, such as IP-based network information, timestamps, or other system-level signals that can be mapped to physical or cloud locations.
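To make that concrete, here is a minimal, hypothetical sketch in Python of what a read-only telemetry record carrying network metadata might look like, and how an IP address could be mapped to a coarse region. It is illustrative only and uses invented names (collect_telemetry, infer_region, REGION_BY_PREFIX); it does not describe Nvidia’s actual software or data format.

# Illustrative sketch only: not Nvidia's software. Shows how a read-only
# telemetry record carrying network metadata could hint at a machine's location.
import json
import socket
from datetime import datetime, timezone

# Hypothetical mapping of IP prefixes to coarse regions; a real system would
# use a GeoIP database or cloud-provider metadata instead.
REGION_BY_PREFIX = {
    "203.0.113.": "us-west",
    "198.51.100.": "eu-central",
}

def collect_telemetry() -> dict:
    """Gather a minimal, read-only health record for one machine."""
    hostname = socket.gethostname()
    try:
        ip_address = socket.gethostbyname(hostname)
    except socket.gaierror:
        ip_address = "127.0.0.1"  # fallback if the hostname does not resolve
    return {
        "hostname": hostname,
        "ip_address": ip_address,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "gpu_utilization_pct": 87.5,  # placeholder health metric
        "gpu_temperature_c": 64,      # placeholder health metric
    }

def infer_region(ip_address: str) -> str:
    """Map an IP address to a coarse compute zone when a known prefix matches."""
    for prefix, region in REGION_BY_PREFIX.items():
        if ip_address.startswith(prefix):
            return region
    return "unknown"

if __name__ == "__main__":
    record = collect_telemetry()
    record["inferred_region"] = infer_region(record["ip_address"])
    # In a fleet-management setup, this JSON would be sent to a central dashboard.
    print(json.dumps(record, indent=2))

The point of the sketch is that nothing in such a record needs GPS or dedicated tracking hardware; network-level metadata alone can be enough to place a machine in a physical or cloud region.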
A screenshot of the software posted on Nvidia’s blog showed details such as the machine’s IP address and location.
Nvidia blog screenshot | Opt-In NVIDIA Software Enables Data Center Fleet Management
Lukasz Olejnik, a senior research fellow at the Department of War Studies, King’s College London, said that while Nvidia indicated that its GPUs do not have hardware tracking technology, the blog did not specify if the data “uses customer input, network data, cloud provider metadata, or other methods.”
“In principle, also, the sent data contains metadata like network address, which may enable location in practice,” Olejnik, who is also an independent consultant, told CNBC.
The software could also detect any unexpected usage patterns that differ from what was declared, he added.
The latest features from Nvidia follow calls by lawmakers in Washington for the company to outfit its chips with tracking software that could help enforce export controls.
Those rules bar Nvidia from selling its more advanced AI chips to companies in China and other prohibited locations without a special license. While U.S. President Donald Trump has recently said he plans to roll back some of these export restrictions, those on Nvidia’s cutting-edge chips will remain in place.
In May, Senator Tom Cotton and a bipartisan group of eight lawmakers introduced the Chip Security Act, which, if passed, would mandate security mechanisms and location verification in advanced AI chips.
“Firms affected by U.S. export controls or China-related restrictions could use the system to verify and prove their GPU fleets remain in approved locations and state, and demonstrate compliant usage to regulators,” Olejnik noted.
“That could actually help in compliance and indirectly on investment outlook positively.”
Pressure on Nvidia has intensified after Justice Department investigations into alleged smuggling rings that moved over $160 million in Nvidia chips to China.
However, Chinese officials have pushed back, warning Nvidia against equipping its chips with tracking features, as well as “potential backdoors and vulnerabilities.”
Following a national security investigation into some of Nvidia’s chips to check for these backdoors, Chinese officials have prevented local tech companies from purchasing products from the American chip designer.
Despite a green light from Trump for Nvidia to ship its previously restricted H200 chips to China, Beijing is reportedly undecided about whether to permit the imports.
Oracle shares plummeted 11% in premarket trading on Thursday, extending the previous session’s losses after the firm reported disappointing results.
The cloud computing and database software maker reported lower-than-expected quarterly revenue on Wednesday, despite booming demand for its artificial intelligence infrastructure. Its revenue came in at $16.06 billion, compared with $16.21 billion expected by analysts, according to data compiled by LSEG.
The slide dragged other AI-related names down with it. Chip darling Nvidia was last seen down 1.5% in premarket trading, memory and storage firm Micron was 1.4% lower, tech heavyweight Microsoft dipped 0.9%, cloud company CoreWeave slid 3% and AMD was down 1.3%.
Oracle has been the subject of much market chatter since raising $18 billion in a jumbo bond sale in September, marking one of the largest debt issuances for the tech industry on record. The name shot onto investor agendas when it inked a $300 billion deal with OpenAI in the same month. Oracle made further moves into cloud infrastructure, where it battles Big Tech names such as Amazon, Microsoft and Google for AI contracts.
Global investors have questioned Oracle’s aggressive AI infrastructure build-out plans and whether it needs such a colossal amount of debt to execute, though other tech firms have also recently issued corporate bonds.
Oracle specifically has secured billions of dollars of construction loans through a consortium of banks tied to data centers in New Mexico and Wisconsin. The firm will raise roughly $20 billion to $30 billion in debt every year for the next three years, according to estimates by Citi analyst Tyler Radke.
Its share price has moved 34% higher year-to-date despite recent losses.
Google DeepMind, the tech giant’s AI unit, unveiled plans for its first “automated research lab” in the U.K. as it signs a partnership that could lead to the company deploying its latest models in the country.
The AI company will open the lab, which will use AI and robotics to run experiments, in the U.K. next year. It will focus on developing new superconductor materials, which can be used in medical imaging technology, alongside new materials for semiconductors.
British scientists will gain “priority access” to some of the world’s most advanced AI tools under the partnership, the U.K. government said in its announcement.
Co-founded in London in 2010 by Nobel laureate Demis Hassabis, DeepMind was acquired by Google in 2014 but has retained a large operational base in the U.K. The company has made several breakthroughs considered crucial to advancing AI technology.
The partnership could also lead to DeepMind working with the government on AI research in areas like nuclear fusion, and to deploying its Gemini models across government and education in the U.K., the government said.
“DeepMind serves as the perfect example of what UK-US tech collaboration can deliver – a firm with roots on both sides of the Atlantic backing British innovators to shape the curve of technological progress,” said U.K. Technology Secretary Liz Kendall in a statement.
“This agreement could help to unlock cleaner energy, smarter public services, and new opportunities which will benefit communities up and down the country,” she said.
“AI has incredible potential to drive a new era of scientific discovery and improve everyday life,” said Hassabis.
“We’re excited to deepen our collaboration with the UK government and build on the country’s rich heritage of innovation to advance science, strengthen security, and deliver tangible improvements for citizens.”
The U.K. has been racing to sign deals with major tech companies as it tries to build out its AI infrastructure and expand public deployment of the technology since publishing a national AI strategy in January.
Microsoft, Nvidia, Google and OpenAI announced plans to funnel over $40 billion of investment into new AI infrastructure in the country in September, during a state visit by U.S. President Donald Trump.