OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
Jason Redmond | AFP | Getty Images
A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote in the open letter.
OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
The current and former employees wrote that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels that technology poses for different types of harm.
“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
The letter also details the current and former employees’ concerns about insufficient whistleblower protections for the AI industry, stating that without effective government oversight, employees are in a relatively unique position to hold companies accountable.
“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”
The letter asks AI companies to commit to not entering or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company’s board, regulators and others; to support a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.
Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three famed computer scientists known for advancing the artificial intelligence field also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.
“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline, as well as a Safety and Security Committee led by members of the board and OpenAI leaders.
Microsoft declined to comment.
Mounting controversy for OpenAI
Last month, OpenAI backtracked on a controversial decision to make former employees choose between signing a non-disparagement agreement that would never expire, or keeping their vested equity in the company. The internal memo, viewed by CNBC, was sent to former employees and shared with current ones.
The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” an OpenAI spokesperson told CNBC at the time.
Tuesday’s open letter also follows OpenAI’s decision last month to disband its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC at the time.
The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.
The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup last month. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and Chief Scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.
Jack Guez | AFP | Getty Images
CEO Sam Altman said on X he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Leike wrote he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI must become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”
The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.
In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”
The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.
Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but gave up his board seat. Adam D’Angelo, who had also voted to oust Altman, remained on the board.
American actress Scarlett Johansson at a photocall for the film “Asteroid City” at the Cannes Film Festival in Cannes, France, on May 24, 2023.
Meanwhile, last month, OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface and audio capabilities, the company’s latest effort to expand the use of its popular chatbot. One week after OpenAI debuted the range of audio voices, the company announced it would pull one of the viral chatbot’s voices named “Sky.”
“Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she had declined to let the company use it.
Liz Reid, vice president, search, Google speaks during an event in New Delhi on December 19, 2022.
Sajjad Hussain | AFP | Getty Images
Testimony in Google’s antitrust search remedies trial, which wrapped up hearings Friday, shows how the company is assessing the cost of the changes proposed by the Department of Justice.
Google head of search Liz Reid testified in court Tuesday that the company would need to divert between 1,000 and 2,000 employees, roughly 20% of Google’s search organization, to carry out some of the proposed remedies, a source with knowledge of the proceedings confirmed.
The testimony comes during the final days of the remedies trial, which will determine what penalties should be taken against Google after a judge last year ruled the company has held an illegal monopoly in its core market of internet search.
The DOJ, which filed the original antitrust suit and proposed remedies, asked the judge to force Google to share the data it uses to generate search results, such as click data. It also asked the judge to end the practice of “compelled syndication,” in which Google strikes deals with companies to ensure its search engine remains the default choice in browsers and on smartphones.
Google pays Apple billions of dollars per year to be the default search engine on iPhones. It’s lucrative for Apple and a valuable way for Google to get more search volume and users.
Apple’s SVP of Services Eddy Cue testified Wednesday that Apple chooses to feature Google because it’s “the best search engine.”
The DOJ also proposed the company divest its Chrome browser but that was not included in Reid’s initial calculation, the source confirmed.
Reid on Tuesday said Google’s proprietary “Knowledge Graph” database, which it uses to surface search results, contains more than 500 billion facts, according to the source, and that Google has invested more than $20 billion in engineering costs and content acquisition over more than a decade.
“People ask Google questions they wouldn’t ask anyone else,” she said, according to the source.
Reid echoed Google’s argument that sharing its data would create privacy risks, the source confirmed.
Closing arguments in the search remedies trial will take place May 29 and 30, with the judge’s decision expected in August.
The company faces a separate remedies trial for its advertising tech business, which is scheduled to begin Sept. 22.
From left, Parker Conrad, co-founder and CEO of Rippling, and Kleiner Perkins investor Ilya Fushman speak at the venture firm’s Fellows Founders Summit in San Francisco in September 2022.
Rippling
Human resources software startup Rippling said Friday that its valuation has swelled to $16.8 billion in its latest fundraising round.
The company raised $450 million in the round, and has committed to buying an additional $200 million worth of shares from current and previous employees. The company’s valuation is up from $13.5 billion in a round a year ago.
Rippling said there was no lead investor. Baillie Gifford, Elad Gil, Goldman Sachs Growth and others participated in the round, according to a statement from the San Francisco-based company.
With the tech IPO market mostly dormant over the past three-plus years, and President Donald Trump’s new tariffs on imports leading several companies to delay planned offerings, the most high-profile late-stage tech startups continue to tap private markets for growth capital. Rippling co-founder and CEO Parker Conrad told CNBC in an interview that the company isn’t planning an IPO in the near future.
Conrad also highlighted a change that’s taken place in public markets in recent years, since inflation began soaring in late 2021, followed by higher interest rates. With concerns about the economy swirling, many tech companies downsized and took other steps toward generating and preserving cash.
“It does look a lot like, in order to be successful in the public markets, your growth rates have to come down so that you can be profitable,” said Conrad, who avoided enacting layoffs. “And so for us, that sort of pushes things out until the company looks profitable and probably slower growing, right?”
At Rippling, annual revenue growth is well over 30%, Conrad said, though he didn’t provide an updated sales figure. The Information reported last year that Rippling doubled annual recurring revenue to over $350 million by the end of 2023 from a year prior.
Given the pace of expansion, Conrad said he isn’t fixated on profits at the moment at Rippling, which ranked 14th on CNBC’s Disruptor 50 list.
Rippling offers payroll services, device management and corporate credit cards, among other products. Competitors include ADP, Paychex, Paycom Software and Paylocity.
There’s also privately held Deel, which Rippling sued in March for allegedly deploying a spy who collected confidential information. Conrad suggested that the publicity surrounding the case may be boosting business.
“I think it’s too early to say, looking at the data, how all of this is going to evolve from a market perspective, but certainly we see some companies that have said, ‘Hey, we’re talking to Rippling because of this,'” Conrad said.
Fortnite was booted from iPhones and Apple’s App Store in 2020, after Epic Games updated its software to link out to the company’s website and avoid Apple’s commissions. The move drew Apple’s anger, and kicked off a legal battle that has lasted for years.
Last month’s ruling, a victory for Epic Games, said that Apple was not allowed to charge a commission on link-outs or dictate if the links look like buttons, paving the way for Fortnite’s return.
Apple could still reject Fortnite’s submission. An Apple representative didn’t respond to a request for comment. Apple is appealing last month’s contempt ruling.
The announcement by Epic Games is the latest salvo in the battle between Epic and Apple, which has played out in courts and before regulators around the world since 2020. Epic Games also sued Google, which operates the Play Store for Android phones.
Last month’s ruling has already shifted the economics of app development for iPhones.
Apple takes between 15% and 30% of purchases made using its in-app payment system. Linking to the web avoids those fees. Before last month’s ruling, Apple briefly allowed link-outs but charged a 27% commission on them.
Developers including Amazon and Spotify have already updated their apps to avoid Apple’s commissions and direct customers to their own websites for payment.
Before last month, Amazon’s Kindle app told users they could not purchase a book in the iPhone app. After a recent update, the app now shows an orange “Get Book” button that links to Amazon’s website.
Fortnite has been available for iPhones in Europe since last year, through Epic Games’ store. Third-party app stores are allowed in Europe under the Digital Markets Act. Users have also been able to play Fortnite on iPhones and iPad through cloud gaming services.