OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024. 

Jason Redmond | AFP | Getty Images

A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote in the open letter.

OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.

The current and former employees wrote that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the level of risk the technology poses for different types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

The letter also details the current and former employees’ concerns about insufficient whistleblower protections for the AI industry, stating that without effective government oversight, employees are in a relatively unique position to hold companies accountable.

“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The letter asks AI companies to commit to not entering or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company’s board, regulators and others; to support a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.

Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three famed computer scientists known for advancing the artificial intelligence field also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.

“We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline, as well as a Safety and Security Committee led by members of the board and OpenAI leaders.

Microsoft declined to comment.

Mounting controversy for OpenAI

Last month, OpenAI backtracked on a controversial decision to make former employees choose between signing a non-disparagement agreement that would never expire, or keeping their vested equity in the company. The internal memo, viewed by CNBC, was sent to former employees and shared with current ones.

The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” an OpenAI spokesperson told CNBC at the time.

Tuesday’s open letter also follows OpenAI’s decision last month to disband its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC at the time.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup last month. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and Chief Scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.

Jack Guez | AFP | Getty Images

CEO Sam Altman said on X he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.

In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”

The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

American actress Scarlett Johansson at a photocall for the film “Asteroid City” at the Cannes Film Festival in Cannes, France, on May 24, 2023.

Mondadori Portfolio | Mondadori Portfolio | Getty Images

Meanwhile, last month, OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface and audio capabilities, the company’s latest effort to expand the use of its popular chatbot. One week after OpenAI debuted the range of audio voices, the company announced it would pull one of the viral chatbot’s voices named “Sky.”

“Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she declined to let them use it.

How VPNs might allow Americans to continue using TikTok

Dado Ruvic | Reuters

If TikTok does indeed go dark for Americans on Sunday, there may be a way for them to keep accessing the popular social app: VPNs.

The Chinese-owned app is set to be removed from mobile app stores and the web for U.S. users on Sunday as a result of a law signed by President Joe Biden in April 2024 requiring that the app be sold to a qualified buyer before the deadline. 

Barring a last-minute sale or reprieve from the Supreme Court, the app will almost certainly vanish from the app stores for iPhones and Android phones. It won’t be removed from people’s phones, but the app could stop working. 

TikTok plans to shut its service for Americans on Sunday, meaning that even those who already have the app downloaded won’t be able to continue using it, according to reports this week from Reuters and The Information. Apple and Google didn’t comment on their plans for taking down the apps from their app stores on Sunday.

“Basically, an app or a website can check where users came from,” said Justas Palekas, head of product at IProyal.com, a proxy service. “Based on that, then they can impose restrictions based on their location.”
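A minimal sketch of what that kind of server-side check could look like, assuming the free MaxMind GeoLite2 country database and the geoip2 Python package; the blocked-country set, file path and fallback behavior are illustrative only, not how TikTok or any particular app actually enforces restrictions.

```python
# Illustrative sketch of an IP-based geoblock; not any specific app's implementation.
# Assumes the GeoLite2 country database has been downloaded locally and that
# the geoip2 package is installed (pip install geoip2).
import geoip2.database
import geoip2.errors

BLOCKED_COUNTRIES = {"US"}  # hypothetical set of regions the service must refuse to serve

def is_request_allowed(client_ip: str, db_path: str = "GeoLite2-Country.mmdb") -> bool:
    """Return False if the client's IP address geolocates to a blocked country."""
    reader = geoip2.database.Reader(db_path)
    try:
        country = reader.country(client_ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        country = None  # location unknown; this sketch allows such requests by default
    finally:
        reader.close()
    return country not in BLOCKED_COUNTRIES
```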

Masking your physical internet access point

That may stop most users, but particularly driven Americans might still be able to keep using the app with a VPN.

VPNs and a related business-to-business technology called proxies work by tunneling a user’s internet traffic through a server in another country, making it look like they are accessing the internet from a location different than the one they are physically in. 

This works because every device that connects to the internet is identified by an IP address. Blocks of addresses are allocated to specific networks and internet providers, so an address also carries information about the physical region a request came from.
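A minimal sketch of that tunneling idea, using Python’s requests library and a public IP-echo service; the proxy URL below is a placeholder, and a real VPN tunnels all of a device’s traffic at the operating-system level rather than one request at a time.

```python
# Sketch of how a proxy changes the apparent origin of a request.
# Requires `pip install requests[socks]`; the proxy endpoint is a placeholder.
import requests

# Without a proxy, the echo service reports the caller's real public IP address.
print(requests.get("https://api.ipify.org").text)

# Routed through a (hypothetical) SOCKS5 proxy in another country, the same
# service sees the proxy server's IP address instead.
proxies = {
    "http": "socks5://user:pass@proxy.example.net:1080",
    "https": "socks5://user:pass@proxy.example.net:1080",
}
print(requests.get("https://api.ipify.org", proxies=proxies).text)
```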

In China, people have used VPNs for years to get around the country’s firewall, which blocks U.S. websites such as Google and Facebook. VPNs saw big spikes in traffic when India banned TikTok in 2020, and people often use VPNs to watch sporting events from countries where official broadcasts aren’t available. 

As of 2022, the VPN market was worth nearly $38 billion, according to the VPN Trust Initiative, a lobbying group.

“We consistently see significant spikes in VPN demand when access to online platforms is restricted, and this situation is no different,” said Lauren Hendry Parsons, privacy advocate at ExpressVPN, a VPN provider that costs $5 per month to use.

“We’re not here to endorse TikTok, but the looming U.S. ban highlights why VPNs matter— millions rely on them for secure, private, and unrestricted access to the internet,” ProtonVPN posted on social media earlier this week. ProtonVPN offers its service for $10 a month. 

The price of VPNs

Both ExpressVPN and ProtonVPN allow users to set their internet-access location. 

Most VPN services charge a monthly fee to pay for their servers and traffic, but some use a business model where they collect user data or traffic trends, such as when Meta offered a free VPN so it could keep an eye on which competitors’ apps were growing quickly.

A key tradeoff for those who use VPNs is speed, since requests have to flow through an intermediary server to mask the user’s physical location.

And although VPNs have worked in the past when governments have banned apps, that doesn’t guarantee they will work if TikTok goes dark. It won’t be clear whether ExpressVPN can still reach TikTok until after the ban takes effect, Parsons told CNBC in an email. It’s also possible that TikTok could detect Americans who try to use VPNs to access the app.

(L-R) Sarah Baus of Charleston, S.C., holds a sign that reads “Keep TikTok” as she and other content creators Sallye Miley of Jackson, Mississippi, and Callie Goodwin of Columbia, S.C., stand outside the U.S. Supreme Court Building as the court hears oral arguments on whether to overturn or delay a law that could lead to a ban of TikTok in the U.S., on January 10, 2025 in Washington, DC. 

Andrew Harnik | Getty Images

VPNs and proxies used to evade regional restrictions have been part of the internet’s landscape for decades, but their use is increasing as governments seek to ban certain services or apps.

Apps are removed at government request all the time. Nearly 1,500 apps were taken down in various regions due to government demands in 2023, according to Apple, with more than 1,000 of them in China. Most are fringe apps that break local laws, such as those against gambling, or Chinese video game rules, but countries are increasingly banning apps for national security or economic development reasons.

Now, the U.S. is poised to ban one of the most popular apps in the country — with 115 million users, it was the second most downloaded app of 2024 across both iOS and Android, according to an estimate provided to CNBC from Sensor Tower, a market intelligence firm.

“As we witness increasing attempts to fragment and censor the internet, the role of VPNs in upholding internet freedom is becoming increasingly critical,” Parsons said.

YouTube donating $15 million in LA wildfire relief, support for creators days before TikTok ban

Charred remains of buildings are pictured following the Palisades Fire in the Pacific Palisades neighborhood in Los Angeles, California, U.S. Jan. 15, 2025. 

Mike Blake | Reuters

Google and YouTube will donate $15 million to support the Los Angeles community and content creators impacted by wildfires, YouTube CEO Neal Mohan announced in a blog post Wednesday.

The contributions will flow to local relief organizations including Emergency Network Los Angeles, the American Red Cross, the Center for Disaster Philanthropy and the Institute for Nonprofit News, the blog said. When the company’s LA offices can safely reopen, impacted creators will also be able to use YouTube’s production facilities “to recover and rebuild their businesses” as well as access community events.

“To all of our employees, the YouTube creator community, and everyone in LA, please stay safe and know we’re here to support,” Google CEO Sundar Pichai posted on X.

The move comes days before Sunday’s impending TikTok ban, which has already seen content creators begin asking fans to follow them on other social platforms. YouTube Shorts, a short-form video platform within YouTube, is a competitor to TikTok, along with Meta’s Instagram Reels and the fast-growing Chinese app Rednote, otherwise known as Xiaohongshu.

“In moments like these, we see the power of communities coming together to support each other — and the strength and resilience of the YouTube community is like no other,” Mohan wrote.

YouTube’s contributions are in line with a host of other LA companies pledging multimillion-dollar donations aimed at assisting employees and residents impacted by the LA fires. Meta announced a $4 million donation split between CEO Mark Zuckerberg and the company, while both Netflix and Comcast pledged $10 million donations to multiple aid groups.

Disclosure: Comcast owns NBCUniversal, the parent company of CNBC.

TikTok’s U.S. operations could be worth as much as $50 billion if ByteDance decides to sell

Jakub Porzycki | Nurphoto | Getty Images

Business moguls such as Elon Musk would need to be prepared to spend tens of billions of dollars for TikTok’s U.S. operations if parent company ByteDance decides to sell.

TikTok is staring at a potential ban in the U.S. if the Supreme Court decides to uphold a national security law under which service providers such as Apple and Google would be penalized for hosting the app after the Sunday deadline. ByteDance has not indicated that it will sell the app’s U.S. unit, but the Chinese government has considered a plan in which X owner Musk would acquire the operations, one of several scenarios under consideration, Bloomberg News reported Monday.

If ByteDance decides to sell, potential buyers may have to spend between $40 billion and $50 billion. That’s the valuation that CFRA Research Senior Vice President Angelo Zino has estimated for TikTok’s U.S. operations. Zino based his valuation on estimates of TikTok’s U.S. user base and revenue in comparison to rival apps. 

TikTok has about 115 million monthly mobile users in the U.S., which is slightly behind Instagram’s 131 million, according to an estimate by market intelligence firm Sensor Tower. That puts TikTok ahead of Snapchat, Pinterest and Reddit, which have U.S. monthly mobile user bases of 96 million, 74 million and 32 million, according to Sensor Tower.
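As a rough back-of-envelope illustration of what those figures imply, dividing the estimated valuation range by the estimated U.S. user base gives a per-user value in the hundreds of dollars; this is only arithmetic on the numbers cited above, not CFRA’s actual model.

```python
# Back-of-envelope math using the figures cited above; illustrative only.
us_monthly_users = 115e6                      # estimated U.S. monthly mobile users
valuation_low, valuation_high = 40e9, 50e9    # Zino's estimated range for the U.S. unit

per_user_low = valuation_low / us_monthly_users
per_user_high = valuation_high / us_monthly_users
print(f"Implied value per U.S. user: ${per_user_low:,.0f} to ${per_user_high:,.0f}")
# prints roughly $348 to $435 per monthly user
```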

Zino’s estimate, however, is down from the more than $60 billion that he estimated for the unit in March 2024, when the House passed the initial national security bill that President Joe Biden signed into law the following month.

The lowered estimate is due to TikTok’s current geopolitical predicament and because “industry multiples have come in a bit” since March, Zino told CNBC in an email. Zino’s estimate doesn’t include TikTok’s valuable recommendation algorithms, which a U.S. acquirer would not obtain as part of a deal; the algorithms and their alleged ties to China are central to the U.S. government’s case that TikTok poses a national security threat.

Analysts at Bloomberg Intelligence have their estimate for TikTok’s U.S. operations pegged in the range of $30 billion to $35 billion. That’s the estimate they published in July, saying at the time that the value of the unit would be “discounted due to it being a forced sale.”  

Bloomberg Intelligence analysts noted that finding a buyer for TikTok’s U.S. operations that can both afford the transaction and deal with the accompanying regulatory scrutiny on data privacy makes a sale challenging. It could also make it difficult for a buyer to expand TikTok’s ads business, they wrote. 

A consortium of businesspeople including billionaire Frank McCourt and O’Leary Ventures Chairman Kevin O’Leary put in a bid to buy TikTok from ByteDance. O’Leary has previously said the group would be willing to pay up to $20 billion to acquire the U.S. assets without the algorithm.

Unlike a Musk bid, O’Leary’s group’s bid would be free from regulatory scrutiny, O’Leary said in a Monday interview with Fox News.

O’Leary said that he’s “a huge Elon Musk fan,” but added “the idea that the regulator, even under Trump’s administration, would allow this is pretty slim.”

TikTok, X and O’Leary Ventures did not respond to requests for comment.
