OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology, and the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz


At most tech CEO hearings in recent years, lawmakers have taken a contentious tone, grilling executives over their data-privacy practices, competitive methods and more.

But at Tuesday’s hearing on AI oversight, which featured OpenAI CEO Sam Altman, lawmakers seemed notably more welcoming toward the ChatGPT maker. One senator even went so far as to ask whether Altman would be qualified to administer rules regulating the industry.

Altman’s warm welcome on Capitol Hill, which included a dinner discussion with dozens of House lawmakers the night before and a separate speaking event Tuesday afternoon attended by House Speaker Kevin McCarthy, R-Calif., has raised concerns among some AI experts who were not in attendance this week.

These experts caution that lawmakers’ decision to learn about the technology from a leading industry executive could unduly sway the solutions they seek to regulate AI. In conversations with CNBC in the days after Altman’s testimony, AI leaders urged Congress to engage with a diverse set of voices in the field to ensure a wide range of concerns are addressed, rather than focus on those that serve corporate interests.

OpenAI did not immediately respond to a request for comment on this story.

A friendly tone

For some experts, the tone of the hearing and Altman’s other engagements on the Hill raised alarm.

Lawmakers’ praise for Altman at times sounded almost like “celebrity worship,” according to Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at New York University.

“You don’t ask the hard questions to people you’re engaged in a fandom about,” she said.

“It doesn’t sound like the kind of hearing that’s oriented around accountability,” said Sarah Myers West, managing director of the AI Now Institute. “Saying, ‘Oh, you should be in charge of a new regulatory agency’ is not an accountability posture.”

West said the “laudatory” tone of some representatives following the dinner with Altman was surprising. She acknowledged it may “signal that they’re just trying to sort of wrap their heads around what this new market even is.”

But she added, “It’s not new. It’s been around for a long time.”

Safiya Umoja Noble, a professor at UCLA and author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” said lawmakers who attended the dinner with Altman seemed “deeply influenced to appreciate his product and what his company is doing. And that also doesn’t seem like a fair deliberation over the facts of what these technologies are.”

“Honestly, it’s disheartening to see Congress let these CEOs pave the way for carte blanche, whatever they want, the terms that are most favorable to them,” Noble said.

Real differences from the social media era?


At Tuesday’s Senate hearing, lawmakers made comparisons to the social media era, noting their surprise that industry executives showed up asking for regulation. But experts who spoke with CNBC said industry calls for regulation are nothing new and often serve an industry’s own interests.

“It’s really important to pay attention to specifics here and not let the supposed novelty of someone in tech saying the word ‘regulation’ without scoffing distract us from the very real stakes and what’s actually being proposed, the substance of those regulations,” said Whittaker.

“Facebook has been using that strategy for years,” Meredith Broussard, New York University professor and author of “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” said of the call for regulation. “Really, what they do is they say, ‘Oh, yeah, we’re definitely ready to be regulated.’… And then they lobby [for] exactly the opposite. They take advantage of the confusion.”

Experts cautioned that the kinds of regulation Altman suggested, like an agency to oversee AI, could actually stall regulation and entrench incumbents.

“That seems like a great way to completely slow down any progress on regulation,” said Margaret Mitchell, researcher and chief ethics scientist at AI company Hugging Face. “Government is already not resourced enough to well support the agencies and entities they already have.”

Ravit Dotan, who leads an AI ethics lab at the University of Pittsburgh as well as AI ethics at generative AI startup Bria.ai, said that while it makes sense for lawmakers to take Big Tech companies’ opinions into account since they are key stakeholders, they shouldn’t dominate the conversation.

“One of the concerns that is coming from smaller companies generally is whether regulation would be something that is so cumbersome that only the big companies are really able to deal with [it], and then smaller companies end up having a lot of burdens,” Dotan said.

Several researchers said the government should focus on enforcing the laws already on the books and applauded a recent joint agency statement that asserted the U.S. already has the power to enforce against discriminatory outcomes from the use of AI.

Dotan said there were bright spots in the hearing when she felt lawmakers were “informed” in their questions. But in other cases, she said she wished lawmakers had pressed Altman for deeper explanations or commitments.

For example, when asked about the likelihood that AI will displace jobs, Altman said that it will eventually create more quality jobs. While Dotan said she agreed with that assessment, she wished lawmakers had pressed Altman on potential solutions to help displaced workers earn a living or gain skills training in the meantime, before new job opportunities become widely available.

“There are so many things that a company with the power of OpenAI backed by Microsoft has when it comes to displacement,” Dotan said. “So to me, to leave it as, ‘Your market is going to sort itself out eventually,’ was very disappointing.”

Diversity of voices

A key message AI experts have for lawmakers and government officials is to include a wider array of voices, both in personal background and field of experience, when considering regulating the technology.

“I think that community organizations and researchers should be at the table; people who have been studying the harmful effects of a variety of different kinds of technologies should be at the table,” said Noble. “We should have policies and resources available for people who’ve been damaged and harmed by these technologies … There are a lot of great ideas for repair that come from people who’ve been harmed. And we really have yet to see meaningful engagement in those ways.”

Mitchell said she hopes Congress engages more specifically with people involved in auditing AI tools and experts in surveillance capitalism and human-computer interactions, among others. West suggested that people with expertise in fields that will be affected by AI should also be included, like labor and climate experts.

Whittaker pointed out that there may already be “more hopeful seeds of meaningful regulation outside of the federal government,” citing the Writers Guild of America strike, whose demands include job protections from AI.

Government should also pay greater attention and offer more resources to researchers in fields like social sciences, who have played a large role in uncovering the ways technology can result in discrimination and bias, according to Noble.

“Many of the challenges around the impact of AI in society has come from humanists and social scientists,” she said. “And yet we see that the funding that is predicated upon our findings, quite frankly, is now being distributed back to computer science departments that work alongside industry.”

Noble said she was “stunned” to see that the White House’s announcement of funding for seven new AI research centers seemed to have an emphasis on computer science.

“Most of the women that I know who have been the leading voices around the harms of AI for the last 20 years are not invited to the White House, are not funded by [the National Science Foundation and] are not included in any kind of transformative support,” Noble said. “And yet our work does have and has had tremendous impact on shifting the conversations about the impact of these technologies on society.”

Noble pointed to the White House meeting earlier this month that included Altman and other tech CEOs, such as Google’s Sundar Pichai and Microsoft’s Satya Nadella. Noble said the photo of that meeting “really told the story of who has put themselves in charge. …The same people who’ve been the makers of the problems are now somehow in charge of the solutions.”

Bringing in independent researchers to engage with government would give those experts opportunities to make “important counterpoints” to corporate testimony, Noble said.

Still, other experts noted that they and their peers have engaged with the government about AI, albeit without the media attention Altman’s hearing received and without a high-profile event like the dinner Altman attended with a wide turnout of lawmakers.

Mitchell worries lawmakers are now “primed” from their discussions with industry leaders.

“They made the decision to start these discussions, to ground these discussions in corporate interests,” Mitchell said. “They could have gone in a totally opposite direction and asked them last.”

Mitchell said she appreciated Altman’s comments on Section 230, the law that helps shield online platforms from being held responsible for their users’ speech. Altman conceded that the outputs of generative AI tools would not necessarily be covered by that legal liability shield, and that a different framework is needed to assess liability for AI products.

“I think, ultimately, the U.S. government will go in a direction that favors large tech corporations,” Mitchell said. “My hope is that other people, or people like me, can at least minimize the damage, or show some of the devil in the details to lead away from some of the more problematic ideas.”

“There’s a whole chorus of people who have been warning about the problems, including bias along the lines of race and gender and disability, inside AI systems,” said Broussard. “And if the critical voices get elevated as much as the commercial voices, then I think we’re going to have a more robust dialogue.”


Quantum stocks Rigetti Computing and D-Wave surged double-digits this week. Here’s what’s driving the big move


Inside Google’s quantum computing lab in Santa Barbara, California.

CNBC

Quantum computing stocks are wrapping up a big week of double-digit gains.

Shares of Rigetti Computing, D-Wave Quantum and Quantum Computing have surged more than 20%. Rigetti and D-Wave Quantum have more than doubled and tripled, respectively, since the start of the year. Arqit Quantum skyrocketed more than 32% this week.

The jump in shares followed a wave of positive news in the quantum space.

Rigetti said it had purchase orders totaling $5.7 million for two of its 9-qubit Novera quantum computing systems. The owner of drugmaker Novo Nordisk and the Danish government also invested 300 million euros in a quantum venture fund.

In a blog post earlier this week, Nvidia also highlighted accelerated computing, which it argues can make “quantum computing breakthroughs of today and tomorrow possible.”

Investors have piled into quantum computing stocks this year, as tech giants Microsoft, Nvidia and Amazon have embraced the technology with a wave of new chip announcements, multimillion-dollar investments and research plans.


How to get Sora app invite codes for OpenAI’s viral AI video creator


Cfoto | Future Publishing | Getty Images

OpenAI’s new artificial intelligence video app Sora has already grabbed the top spot in Apple‘s App Store as its number one free app, despite being invite-only.

Sora, which was launched on Tuesday, allows users to create short-form AI videos and share them in a feed. The app is available to iPhone users but requires an invite code to access.

Here’s how to snag a Sora app invite code:

  • First, download the app from the iOS App Store. Note that Sora requires iOS 18.0 or later.
  • Log in using your OpenAI account.
  • Tap “Notify me when access opens.”

A screen will then appear asking for an access code.

OpenAI has said it is currently prioritizing Sora access for paying ChatGPT Pro subscribers. The app is only available in the U.S. and Canada but is expected to roll out to additional countries soon, the company said.


If you do not know someone who can provide an access code, several people are sharing invite codes on the official OpenAI Discord server, as well as on X and Reddit threads.

Once you input your access code, you can start generating AI videos using text or images. Users are also able to cameo as characters in their videos, as well as “remix” other posts.

The app is powered by the new Sora 2.0 model, an updated version of the original Sora model from last year. The video generation model is “physically accurate, realistic, and more controllable” than prior systems, the company said in a blog post.



OpenAI’s invite-only video generation app Sora tops Apple’s App Store


Sopa Images | Lightrocket | Getty Images

OpenAI now has two of the top three free apps in Apple’s App Store, and its new video generation app Sora has snagged the coveted No. 1 spot.

The artificial intelligence startup launched Sora on Tuesday, and it allows users to generate short-form AI videos, remix videos created by other users and post them to a shared feed. Sora is only available on iOS devices and is invite-based, which means users need a code to access it.

Despite these restrictions, Sora has secured the top spot in the App Store, ahead of Google‘s Gemini and OpenAI’s generative chatbot ChatGPT.

“It’s been epic to see what the collective creativity of humanity is capable of so far,” Bill Peebles, head of Sora at OpenAI, wrote in a post on X on Friday. “Team is iterating fast and listening to feedback.”


Sora is powered by OpenAI’s latest video and audio generation model, called Sora 2. OpenAI said the model is capable of creating scenes and sounds with “a high degree of realism,” according to a blog post. The startup’s first video generation model, Sora, was announced in February 2024.

OpenAI said it has taken steps to address potential safety concerns around the Sora app, including giving users explicit control over how their likeness is used on the platform. But some of the initial videos posted to the app, including one that depicts OpenAI CEO Sam Altman shoplifting, have sparked debates about its utility, potential for harm and legality.

“It is easy to imagine the degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed,” Altman wrote in a post on X on Tuesday. “The team has put great care and thought into trying to figure out how to make a delightful product that doesn’t fall into that trap, and has come up with a number of promising ideas.”
