OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology, and the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz

At most tech CEO hearings in recent years, lawmakers have taken a contentious tone, grilling executives over their data-privacy practices, competitive methods and more.

But at Tuesday’s hearing on AI oversight featuring OpenAI CEO Sam Altman, lawmakers seemed notably more welcoming toward the ChatGPT maker. One senator even went so far as to ask whether Altman would be qualified to administer rules regulating the industry.

Altman’s warm welcome on Capitol Hill, which included a dinner discussion the night prior with dozens of House lawmakers and a separate speaking event Tuesday afternoon attended by House Speaker Kevin McCarthy, R-Calif., has raised concerns from some AI experts who were not in attendance this week.

These experts caution that lawmakers’ decision to learn about the technology from a leading industry executive could unduly sway the solutions they seek to regulate AI. In conversations with CNBC in the days after Altman’s testimony, AI leaders urged Congress to engage with a diverse set of voices in the field to ensure a wide range of concerns are addressed, rather than focus on those that serve corporate interests.

OpenAI did not immediately respond to a request for comment on this story.

A friendly tone

For some experts, the tone of the hearing and Altman’s other engagements on the Hill raised alarm.

Lawmakers’ praise for Altman at times sounded almost like “celebrity worship,” according to Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at New York University.

“You don’t ask the hard questions to people you’re engaged in a fandom about,” she said.

“It doesn’t sound like the kind of hearing that’s oriented around accountability,” said Sarah Myers West, managing director of the AI Now Institute. “Saying, ‘Oh, you should be in charge of a new regulatory agency’ is not an accountability posture.”

West said the “laudatory” tone of some representatives following the dinner with Altman was surprising. She acknowledged it may “signal that they’re just trying to sort of wrap their heads around what this new market even is.”

But she added, “It’s not new. It’s been around for a long time.”

Safiya Umoja Noble, a professor at UCLA and author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” said lawmakers who attended the dinner with Altman seemed “deeply influenced to appreciate his product and what his company is doing. And that also doesn’t seem like a fair deliberation over the facts of what these technologies are.”

“Honestly, it’s disheartening to see Congress let these CEOs pave the way for carte blanche, whatever they want, the terms that are most favorable to them,” Noble said.

Real differences from the social media era?

At Tuesday’s Senate hearing, lawmakers made comparisons to the social media era, noting their surprise that industry executives showed up asking for regulation. But experts who spoke with CNBC said industry calls for regulation are nothing new and often serve an industry’s own interests.

“It’s really important to pay attention to specifics here and not let the supposed novelty of someone in tech saying the word ‘regulation’ without scoffing distract us from the very real stakes and what’s actually being proposed, the substance of those regulations,” said Whittaker.

“Facebook has been using that strategy for years,” Meredith Broussard, New York University professor and author of “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” said of the call for regulation. “Really, what they do is they say, ‘Oh, yeah, we’re definitely ready to be regulated.’… And then they lobby [for] exactly the opposite. They take advantage of the confusion.”

Experts cautioned that the kinds of regulation Altman suggested, like an agency to oversee AI, could actually stall regulation and entrench incumbents.

“That seems like a great way to completely slow down any progress on regulation,” said Margaret Mitchell, researcher and chief ethics scientist at AI company Hugging Face. “Government is already not resourced enough to well support the agencies and entities they already have.”

Ravit Dotan, who leads an AI ethics lab at the University of Pittsburgh as well as AI ethics at generative AI startup Bria.ai, said that while it makes sense for lawmakers to take Big Tech companies’ opinions into account since they are key stakeholders, they shouldn’t dominate the conversation.

“One of the concerns that is coming from smaller companies generally is whether regulation would be something that is so cumbersome that only the big companies are really able to deal with [it], and then smaller companies end up having a lot of burdens,” Dotan said.

Several researchers said the government should focus on enforcing the laws already on the books and applauded a recent joint agency statement that asserted the U.S. already has the power to enforce against discriminatory outcomes from the use of AI.

Dotan said there were bright spots in the hearing when she felt lawmakers were “informed” in their questions. But in other cases, she said she wished lawmakers had pressed Altman for deeper explanations or commitments.

For example, when asked about the likelihood that AI will displace jobs, Altman said that eventually it will create more quality jobs. While Dotan said she agreed with that assessment, she wished lawmakers had asked Altman for more potential solutions to help displaced workers find a living or gain skills training in the meantime, before new job opportunities become more widely available.

“There are so many things that a company with the power of OpenAI backed by Microsoft has when it comes to displacement,” Dotan said. “So to me, to leave it as, ‘Your market is going to sort itself out eventually,’ was very disappointing.”

Diversity of voices

A key message AI experts have for lawmakers and government officials is to include a wider array of voices, both in personal background and field of experience, when considering regulating the technology.

“I think that community organizations and researchers should be at the table; people who have been studying the harmful effects of a variety of different kinds of technologies should be at the table,” said Noble. “We should have policies and resources available for people who’ve been damaged and harmed by these technologies … There are a lot of great ideas for repair that come from people who’ve been harmed. And we really have yet to see meaningful engagement in those ways.”

Mitchell said she hopes Congress engages more specifically with people involved in auditing AI tools and experts in surveillance capitalism and human-computer interactions, among others. West suggested that people with expertise in fields that will be affected by AI should also be included, like labor and climate experts.

Whittaker pointed out that there may already be “more hopeful seeds of meaningful regulation outside of the federal government,” pointing to the Writers Guild of America strike as an example, in which demands include job protections from AI.

Government should also pay greater attention and offer more resources to researchers in fields like social sciences, who have played a large role in uncovering the ways technology can result in discrimination and bias, according to Noble.

“Many of the challenges around the impact of AI in society has come from humanists and social scientists,” she said. “And yet we see that the funding that is predicated upon our findings, quite frankly, is now being distributed back to computer science departments that work alongside industry.”

Noble said she was “stunned” to see that the White House’s announcement of funding for seven new AI research centers seemed to have an emphasis on computer science.

“Most of the women that I know who have been the leading voices around the harms of AI for the last 20 years are not invited to the White House, are not funded by [the National Science Foundation and] are not included in any kind of transformative support,” Noble said. “And yet our work does have and has had tremendous impact on shifting the conversations about the impact of these technologies on society.”

Noble pointed to the White House meeting earlier this month that included Altman and other tech CEOs, such as Google’s Sundar Pichai and Microsoft’s Satya Nadella. Noble said the photo of that meeting “really told the story of who has put themselves in charge. …The same people who’ve been the makers of the problems are now somehow in charge of the solutions.”

Bringing in independent researchers to engage with government would give those experts opportunities to make “important counterpoints” to corporate testimony, Noble said.

Still, other experts noted that they and their peers have engaged with government about AI, albeit without the same media attention Altman’s hearing received and perhaps without a large event like the dinner Altman attended with a wide turnout of lawmakers.

Mitchell worries lawmakers are now “primed” from their discussions with industry leaders.

“They made the decision to start these discussions, to ground these discussions in corporate interests,” Mitchell said. “They could have gone in a totally opposite direction and asked them last.”

Mitchell said she appreciated Altman’s comments on Section 230, the law that helps shield online platforms from being held responsible for their users’ speech. Altman conceded that outputs of generative AI tools would not necessarily be covered by the legal liability shield and a different framework is needed to assess liability for AI products.

“I think, ultimately, the U.S. government will go in a direction that favors large tech corporations,” Mitchell said. “My hope is that other people, or people like me, can at least minimize the damage, or show some of the devil in the details to lead away from some of the more problematic ideas.”

“There’s a whole chorus of people who have been warning about the problems, including bias along the lines of race and gender and disability, inside AI systems,” said Broussard. “And if the critical voices get elevated as much as the commercial voices, then I think we’re going to have a more robust dialogue.”


Jeff Bezos says AI is in an ‘industrial bubble’ but society to get ‘gigantic’ benefits from the tech

Amazon founder Jeff Bezos speaks with John Elkann, CEO of Exor and chairman of Ferrari at Italian Tech Week on October 3, 2025.

Arjun Kharpal | CNBC

TURIN, Italy — Artificial intelligence is currently in an “industrial bubble,” but the technology is “real” and will bring big benefits to society, Amazon founder Jeff Bezos said on Friday.

The term “bubble” usually refers to a period in which stock prices or company valuations become disconnected from the fundamentals of a business. One of the most famous bubbles to burst was the 2000 dot-com crash, when the value of internet companies plummeted.

Exor CEO John Elkann asked Bezos on stage at Italian Tech Week in Turin, Italy, whether there were signs that the current AI industry is in a bubble.

“This is a kind of industrial bubble,” the Amazon founder said.

Bezos laid out some of the key characteristics of bubbles, noting that when they happen, stock prices are “disconnected from the fundamentals” of a business.

“The second thing that happens is that people get very excited like they are today about artificial intelligence,” Bezos added.

During bubbles, every experiment or idea gets funded, he told the audience.

“The good ideas and the bad ideas. And investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. And that’s also probably happening today,” Bezos said.

“But that doesn’t mean anything that is happening isn’t real. AI is real, and it is going to change every industry.”

U.S. model upgrades are pushing AI startups to move fast – it’s unclear if Europe can keep up

Founded in 2022, ElevenLabs is an AI voice generation startup based in London. It competes with the likes of Speechmatics and Hume AI.

Sopa Images | Lightrocket | Getty Images

Artificial intelligence companies are the hottest tickets in today’s startup ecosystem, but the pace of change is dominated by developments at OpenAI and Anthropic. For startups building on top of their models, it’s sink or swim.

With the U.S. currently surging ahead in the large language model (LLM) race, which demands huge checks, Europe’s opportunity lies in building tools that make AI useful, which is known as the application layer.

“That’s also where we think most of the profit will be made in the future,” Robert Lacher, a founding partner of Visionaries Club, told CNBC’s “Squawk Box Europe” earlier this year.

Generative AI companies clinched $49.2 billion in venture capital (VC) investment in the first half of 2025, surpassing 2024’s $44.2 billion across the whole year, according to consultancy EY. The U.S. is responsible for the majority of that, accounting for 97% of deal value and 62% of volume; Europe represented just 2% of value, but 23% of volume. 

Risk appetite among VC investors on the continent is typically lower than in the U.S., while market fragmentation has long made it difficult for startups to scale quickly. Hungover from the 2021 tech boom and amid an economic downturn, European investors have also refocused on steady growth and sound business metrics. AI is still drawing attention in Europe, but investment pales in comparison to the U.S.

Now, frequent updates of AI models like OpenAI’s ChatGPT and Anthropic’s Claude are pushing companies built on top of them to iterate faster or risk falling behind.

Europe does have its own LLM company – Mistral, the French startup that has raised 1.7 billion euros ($2 billion) in capital so far, including from Dutch chipmaker ASML – that is positioned as an open-source competitor to OpenAI, but there’s still a lot of ground to cover.

“The speed of innovation, speed of product velocity, speed of distribution, actually ends up winning over everything else,” Bryan Kim, a partner at VC firm Andreessen Horowitz, told CNBC’s “Squawk Box Europe” on Thursday from Italian Tech Week.

Sweden’s Lovable, a “vibe-coding” platform that enables others to build apps and websites with AI, and AI agent startup Sana are examples of companies putting AI to use. Meanwhile, London-based AI video generation startup Synthesia and synthetic audio company ElevenLabs also have specific AI applications. The latter did, however, later build its own LLM.

But “what does it mean when the product and technology you’re actually relying on changes every month? How do you move any slower than that and expect to win the game?” Kim said.

“What I came around with is, actually, momentum is the moat at this current juncture of AI development. Maybe we’ll get to a point where the model layer stabilizes it a little bit, and then we could talk about other things, but, right now, momentum is the only moat that I see,” he added.

Building the next Spotify

Momentum – and the ability to constantly iterate – often comes down to bagging cash to scale.

“If you look at the Europeans, we are revolutionary, we are romantics, we are resourceful,” Jean La Rochebrochard, managing director at Kima Ventures, told “Squawk Box Europe” on Thursday. However, “it’s hard to compete with a country where the appetite for risk is way higher, where the amount of capital is way higher as well, and the talent,” he said, referring to the U.S. and speaking about AI generally.

La Rochebrochard is still optimistic that Europe can be home to the next big winner. For him, founders who have built outside of Europe and return to start up another venture are ones to watch. 

“We do all hope that Mistral will become one of these behemoths, one of these $100 billion companies in Europe, just like Revolut did in the UK. If Revolut, Mistral and Spotify are doing it, why not another 10, 20, 50 others?” the investor added.

Indeed, British AI cloud company Nscale just nabbed $433 million in new funding, hot on the heels of a $1.1 billion Series B – the largest in Europe – announced just days earlier. However, like Mistral, Nscale is an AI infrastructure play rather than an application-layer one – a timely development as AI sovereignty continues to grab political and investor attention.

For Lovable CEO Anton Osika, it’s much simpler. “The only thing we need to do in Europe is change our mindset that it is possible,” he told “Squawk Box Europe” on Tuesday.

“Traditionally it has been more of a constraint with access to the amount of technical talent, of access to capital, that is not the bottleneck anymore,” he argued. 

Osika’s own company, for example, can act as a CEO’s technical cofounder if they need one. Meanwhile, Lovable is also luring top talent from the U.S. to Sweden to work at the startup, Osika said. 

He added: “It’s much faster for us to hire in Europe than it is to do so for U.S. counterparts, where there’s 1,000 more companies like Lovable, so it is a competitive advantage to be building from Europe.”

Silicon Valley’s new defense tech ‘neoprimes’ are pulling billions in funding to challenge legacy giants

Guvendemir | E+ | Getty Images

A wave of defense tech startups in Silicon Valley is drawing billions in funding and reshaping America’s national security.

Anduril Industries, recently valued at $30.5 billion following its latest funding round, is among the so-called “neoprimes” — companies challenging the dominance of legacy contractors, dubbed “primes,” such as Lockheed Martin, Northrop Grumman, Boeing, General Dynamics and RTX (formerly Raytheon).

“There’s more money than ever going to what we call the ‘neoprimes,'” Jameson Darby, co-founder and director of autonomy at investment syndicate MilVet Angels, or MVA, told CNBC. “It’s still a fraction of the overall budget, but the trend is all positive.”

Other examples of defense tech startups challenging the incumbents include SpaceX and Palantir Technologies, said Darby, who is also a founding member of the U.S. Department of Defense’s Defense Innovation Unit.

Unlike the primes, these startups are faster, leaner and software-first — with many of them building things that can help close “critical technology gaps that are really important to national security,” said Ernestine Fu Mak, co-founder of MVA and founder of Brave Capital, a venture capital firm.

Venture funding for U.S.-based defense tech startups totaled about $38 billion through the first half of 2025, and could exceed its 2021 peak if the pace remains constant for the rest of the year, according to JPMorgan.

‘The battlefield is changing’

As the global war landscape has changed over the past decades, the U.S. Department of Defense has identified several technologies that are critical to national security, including hypersonics, energy resilience, space technology, integrated sensing and cyber.

“In a post-9/11 world, the entire Department of Defense effectively focused on … the global war on terrorism. It was our military versus insurgents, guerrillas, asymmetric warfare, relatively low-tech fighters in most cases,” said Darby.

But war today is more focused on “great power competition,” said Mak.

“The focus is more on deterring and competing with [adversaries] in these very high-tech, multi-domain conflicts,” Mak added. “The battlefield is changing and new technologies are needed… warfare no longer being limited to land, sea, air. There’s also cyber and space domains that have become contested.”

Today, some of these Silicon Valley “neoprimes” are developing not just weapons, but also dual-use technologies that can be applied both commercially and by militaries.

“So things like artificial intelligence and autonomy have broad, sweeping commercial applications, but they’re also clearly a force multiplier in a military context,” said Darby. “[The] Department of War is rapidly assessing and adopting these dual-use technologies … they’re sending signals to the investment world, to the defense industrial base, that the U.S. government needs these things.”

That direction from the government has, in turn, provided a clear and strategic roadmap for both investors and entrepreneurs, said Mak.

The ‘new guard’

On Sept. 17, MVA came out of stealth mode after quietly backing some leading defense tech startups since 2021.

Today, Mak says the syndicate’s roughly 250 members include tech founders, Wall Street financiers, company executives, intelligence officials, former military leaders and Navy SEALs. Together, they’ve invested in companies like Anduril Industries, Shield AI, Hermeus, Ursa Major and Aetherflux.

“Overall, we believe that ‘neoprimes’ cannot exist in the abstract. They require people — individuals who bring technical expertise, who carry a deep sense of mission, and who contribute complementary voices and talents. Together, this coalition forms what we are convening and calling the ‘new guard,'” said Mak.

She added that modern national security requires both the “warrior’s insight on the battlefield” and the “builder’s drive for innovation.”

“Working together with engaged, informed patriots whose participation strengthens our defense ecosystem and reinforces the very fabric of national security,” Mak said.

Mak and Darby both agree that as new technologies develop and make their way onto battlefields globally, they are changing the way militaries fight, which can also pose new threats.

“You’re seeing these technologists, these builders … building defense tech, and the reason why they’re doing so, is not to initiate conflict, but rather to create a credible deterrent that discourages aggression,” said Mak.

“No one in defense tech is looking to wage war, rather, it’s looking to deter it and wanting adversaries to think twice before threatening peace and stability,” Mak added.
