OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology, and the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz
Elizabeth Frantz | Reuters
At most tech CEO hearings in recent years, lawmakers have taken a contentious tone, grilling executives over their data-privacy practices, competitive methods and more.
But at Tuesday’s hearing on AI oversight including OpenAI CEO Sam Altman, lawmakers seemed notably more welcoming toward the ChatGPT maker. One senator even went as far as asking whether Altman would be qualified to administer rules regulating the industry.
Altman’s warm welcome on Capitol Hill, which included a dinner discussion the night prior with dozens of House lawmakers and a separate speaking event Tuesday afternoon attended by House Speaker Kevin McCarthy, R-Calif., has raised concerns from some AI experts who were not in attendance this week.
These experts caution that lawmakers’ decision to learn about the technology from a leading industry executive could unduly sway the solutions they seek to regulate AI. In conversations with CNBC in the days after Altman’s testimony, AI leaders urged Congress to engage with a diverse set of voices in the field to ensure a wide range of concerns are addressed, rather than focus on those that serve corporate interests.
OpenAI did not immediately respond to a request for comment on this story.
A friendly tone
For some experts, the tone of the hearing and Altman’s other engagements on the Hill raised alarm.
Lawmakers’ praise for Altman at times sounded almost like “celebrity worship,” according to Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at New York University.
“You don’t ask the hard questions to people you’re engaged in a fandom about,” she said.
“It doesn’t sound like the kind of hearing that’s oriented around accountability,” said Sarah Myers West, managing director of the AI Now Institute. “Saying, ‘Oh, you should be in charge of a new regulatory agency’ is not an accountability posture.”
West said the “laudatory” tone of some representatives following the dinner with Altman was surprising. She acknowledged it may “signal that they’re just trying to sort of wrap their heads around what this new market even is.”
But she added, “It’s not new. It’s been around for a long time.”
Safiya Umoja Noble, a professor at UCLA and author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” said lawmakers who attended the dinner with Altman seemed “deeply influenced to appreciate his product and what his company is doing. And that also doesn’t seem like a fair deliberation over the facts of what these technologies are.”
“Honestly, it’s disheartening to see Congress let these CEOs pave the way for carte blanche, whatever they want, the terms that are most favorable to them,” Noble said.
Real differences from the social media era?
At Tuesday’s Senate hearing, lawmakers made comparisons to the social media era, noting their surprise that industry executives showed up asking for regulation. But experts who spoke with CNBC said industry calls for regulation are nothing new and often serve an industry’s own interests.
“It’s really important to pay attention to specifics here and not let the supposed novelty of someone in tech saying the word ‘regulation’ without scoffing distract us from the very real stakes and what’s actually being proposed, the substance of those regulations,” said Whittaker.
“Facebook has been using that strategy for years,” Meredith Broussard, New York University professor and author of “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” said of the call for regulation. “Really, what they do is they say, ‘Oh, yeah, we’re definitely ready to be regulated.’… And then they lobby [for] exactly the opposite. They take advantage of the confusion.”
Experts cautioned that the kinds of regulation Altman suggested, like an agency to oversee AI, could actually stall regulation and entrench incumbents.
“That seems like a great way to completely slow down any progress on regulation,” said Margaret Mitchell, researcher and chief ethics scientist at AI company Hugging Face. “Government is already not resourced enough to well support the agencies and entities they already have.”
Ravit Dotan, who leads an AI ethics lab at the University of Pittsburgh as well as AI ethics at generative AI startup Bria.ai, said that while it makes sense for lawmakers to take Big Tech companies’ opinions into account since they are key stakeholders, they shouldn’t dominate the conversation.
“One of the concerns that is coming from smaller companies generally is whether regulation would be something that is so cumbersome that only the big companies are really able to deal with [it], and then smaller companies end up having a lot of burdens,” Dotan said.
Several researchers said the government should focus on enforcing the laws already on the books and applauded a recent joint agency statement that asserted the U.S. already has the power to enforce against discriminatory outcomes from the use of AI.
Dotan said there were bright spots in the hearing when she felt lawmakers were “informed” in their questions. But in other cases, she said she wished lawmakers had pressed Altman for deeper explanations or commitments.
For example, when asked about the likelihood that AI will displace jobs, Altman said that eventually it will create more quality jobs. While Dotan said she agreed with that assessment, she wished lawmakers had asked Altman for more potential solutions to help displaced workers find a living or gain skills training in the meantime, before new job opportunities become more widely available.
“There are so many things that a company with the power of OpenAI backed by Microsoft has when it comes to displacement,” Dotan said. “So to me, to leave it as, ‘Your market is going to sort itself out eventually,’ was very disappointing.”
Diversity of voices
A key message AI experts have for lawmakers and government officials is to include a wider array of voices, both in personal background and field of experience, when considering regulating the technology.
“I think that community organizations and researchers should be at the table; people who have been studying the harmful effects of a variety of different kinds of technologies should be at the table,” said Noble. “We should have policies and resources available for people who’ve been damaged and harmed by these technologies … There are a lot of great ideas for repair that come from people who’ve been harmed. And we really have yet to see meaningful engagement in those ways.”
Mitchell said she hopes Congress engages more specifically with people involved in auditing AI tools and experts in surveillance capitalism and human-computer interactions, among others. West suggested that people with expertise in fields that will be affected by AI should also be included, like labor and climate experts.
Whittaker pointed out that there may already be “more hopeful seeds of meaningful regulation outside of the federal government,” pointing to the Writers Guild of America strike as an example, in which demands include job protections from AI.
Government should also pay greater attention and offer more resources to researchers in fields like social sciences, who have played a large role in uncovering the ways technology can result in discrimination and bias, according to Noble.
“Many of the challenges around the impact of AI in society has come from humanists and social scientists,” she said. “And yet we see that the funding that is predicated upon our findings, quite frankly, is now being distributed back to computer science departments that work alongside industry.”
“Most of the women that I know who have been the leading voices around the harms of AI for the last 20 years are not invited to the White House, are not funded by [the National Science Foundation and] are not included in any kind of transformative support,” Noble said. “And yet our work does have and has had tremendous impact on shifting the conversations about the impact of these technologies on society.”
Noble pointed to the White House meeting earlier this month that included Altman and other tech CEOs, such as Google’s Sundar Pichai and Microsoft’s Satya Nadella. Noble said the photo of that meeting “really told the story of who has put themselves in charge. …The same people who’ve been the makers of the problems are now somehow in charge of the solutions.”
Bringing in independent researchers to engage with government would give those experts opportunities to make “important counterpoints” to corporate testimony, Noble said.
Still, other experts noted that they and their peers have engaged with government about AI, albeit without the same media attention Altman’s hearing received and perhaps without a large event like the dinner Altman attended with a wide turnout of lawmakers.
Mitchell worries lawmakers are now “primed” from their discussions with industry leaders.
“They made the decision to start these discussions, to ground these discussions in corporate interests,” Mitchell said. “They could have gone in a totally opposite direction and asked them last.”
Mitchell said she appreciated Altman’s comments on Section 230, the law that helps shield online platforms from being held responsible for their users’ speech. Altman conceded that outputs of generative AI tools would not necessarily be covered by the legal liability shield and a different framework is needed to assess liability for AI products.
“I think, ultimately, the U.S. government will go in a direction that favors large tech corporations,” Mitchell said. “My hope is that other people, or people like me, can at least minimize the damage, or show some of the devil in the details to lead away from some of the more problematic ideas.”
“There’s a whole chorus of people who have been warning about the problems, including bias along the lines of race and gender and disability, inside AI systems,” said Broussard. “And if the critical voices get elevated as much as the commercial voices, then I think we’re going to have a more robust dialogue.”
Honor launched the Honor Magic V5 on Wednesday July 2, as it looks to challenge Samsung in the foldable space.
Honor
Honor on Wednesday touted the slimness and battery capacity of its newly launched thin foldable phone, as it lays down a fresh challenge to market leader Samsung.
The Honor Magic V5 will initially go on sale in China, but the Chinese tech firm will likely bring the device to international markets later this year.
Honor said the Magic V5 is 8.8 mm to 9 mm when folded, depending on the color choice. The phone’s predecessor, the Magic V3 (Honor skipped the Magic V4 name), was 9.2 mm when folded. Honor said the Magic V5 weighs 217 grams to 222 grams, again depending on the color model. The previous version was 226 grams.
In China, Honor will launch a special 1 terabyte storage version of the Magic V5, which it says will have a battery capacity of more than 6,000 milliampere-hours — among the highest for foldable phones.
Honor has leaned heavily on these features as competition in foldables ramps up, even though such devices hold only a small share of the overall smartphone market.
Honor vs. Samsung
Foldables represented less than 2% of the overall smartphone market in 2024, according to International Data Corporation. Samsung was the biggest player with 34% market share followed by Huawei with just under 24%, IDC added. Honor took the fourth spot with a nearly 11% share.
Honor is looking to get a head start on Samsung, which has its own foldable launch next week on July 9.
Francisco Jeronimo, a vice president at IDC, said the Magic V5 is a strong offering from Honor.
“This is the dream foldable smartphone that any user who is interested in this category will think of,” Jeronimo told CNBC, pointing to features such as the battery.
“This phone continues to push the bar forward, and it will challenge Samsung as they are about to launch their seventh generation of foldable phones,” he added.
At its event next week, Samsung is expected to release a foldable that is thinner than its predecessor and could come close to challenging Honor’s offering by way of size, analysts said. If that happens, then Honor will be facing more competition, especially against Samsung, which has a bigger global footprint.
“The biggest challenge for Honor is the brand equity and distribution reach vs Samsung, where the Korean vendor has the edge,” Neil Shah, co-founder of Counterpoint Research, told CNBC.
Honor’s push into international markets beyond China is still fairly young, with the company looking to build up its brand.
“Further, if Samsung catches up with a thinner form-factor in upcoming iterations, as it has been the real pioneer in foldables with its vertical integration expertise from displays to batteries, the differentiating factor might narrow for Honor,” Shah added.
Vertical integration refers to when a company owns several parts of a product’s supply chain. Samsung has display and battery businesses that supply components for its foldables.
In March, Honor pledged a $10 billion investment in AI over the next five years, with part of that going toward the development of next-generation agents that are seen as more advanced personal assistants.
Honor said its AI assistant Yoyo can interact with other AI models, such as those created by DeepSeek and Alibaba in China, to create presentation decks.
The company also flagged that its AI agent can hail a taxi across multiple apps in China, automatically accepting the quickest ride to arrive and cancelling the rest.
One of the most popular gaming YouTubers is named Bloo and has bright blue wavy hair and dark blue eyes. But he isn’t a human — he’s a fully virtual personality powered by artificial intelligence.
“I’m here to keep my millions of viewers worldwide entertained and coming back for more,” said Bloo in an interview with CNBC. “I’m all about good vibes and engaging content. I’m built by humans, but boosted by AI.”
Bloo is a virtual YouTuber, or VTuber, who has built a massive following of 2.5 million subscribers and more than 700 million views through videos of him playing popular games like Grand Theft Auto, Roblox and Minecraft. VTubers first gained traction in Japan in the 2010s. Now, advances in AI are making it easier than ever to create VTubers, fueling a new wave of virtual creators on YouTube.
The virtual character — whose bright colors and 3D physique look like something out of a Pixar film or the video game Fortnite — was created by Jordi van den Bussche, a longtime YouTuber also known as kwebbelkop. Van den Bussche created Bloo after finding himself unable to keep up with the demands of content creation; the effort no longer matched the output.
“Turns out, the flaw in this equation is the human, so we need to somehow remove the human,” said van den Bussche, a 29-year-old from Amsterdam, in an interview. “The only logical way was to replace the human with either a photorealistic person or a cartoon. The VTuber was the only option, and that’s where Bloo came from.”
Jordi Van Den Bussche, YouTuber known as Kwebbelkop.
Courtesy: Jordi Van Den Bussche
Bloo has already generated more than seven figures in revenue, according to van den Bussche. Many VTubers like Bloo are “puppeteered,” meaning a human controls the character’s voice and movements in real time using motion capture or face-tracking technology. Everything else, from video thumbnails to voice dubbing in other languages, is handled by AI technology from ElevenLabs, OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. Van den Bussche’s long-term goal is for Bloo’s entire personality and content creation process to be run by AI.
Van den Bussche has already tested fully AI-generated videos on Bloo’s channel, but says the results have not yet been promising. The content doesn’t perform as well because the AI still lacks the intuition and creative instincts of a human, he said.
“When AI can do it better, faster or cheaper than humans, that’s when we’ll start using it permanently,” van den Bussche said.
The technology might not be far away.
Startup Hedra offers a product that uses AI technology to generate videos that are up to five minutes long. It raised $32 million in a funding round in May led by Andreessen Horowitz’s Infrastructure fund.
Hedra’s product, Character-3, allows users to create AI-generated characters for videos and can add dialogue and other characteristics. CEO Michael Lingelbach told CNBC Hedra is working on a product that will allow users to create self-sustaining, fully-automated characters.
Hedra’s product Character-3 allows users to make figures powered by AI that can be animated in real-time.
Hedra
“We’re doing a lot of research accelerating models like Character-3 to real time, and that’s going to be a really good fit for VTubers,” Lingelbach said.
Character-3’s technology is already being used by a growing number of creators who are experimenting with new formats, and many of their projects are going viral. One of those is comedian Jon Lajoie’s Talking Baby Podcast, which features a hyper-realistic animated baby talking into a microphone. Another is Milla Sofia, a virtual singer and artist whose AI-generated music videos attract thousands of views.
Talking Baby Podcast
Source: Instagram | Talking Baby Podcast
These creators are using Character-3 to produce content that stands out on social media, helping them reach wide audiences without the cost and complexity of traditional production.
AI-generated video is a rapidly evolving technology that is reshaping how content is made and shared online, making it easier than ever to produce high-quality video without cameras, actors or editing software. In May, Google announced Veo 3, a tool that creates AI-generated videos with audio.
Google said it uses a subset of YouTube content to train Veo 3, CNBC reported in June. While many creators said they were unaware of the training, experts said it has the potential to create an intellectual property crisis on the platform.
Faceless AI YouTubers
Creators are increasingly finding profitable ways to capitalize on the generative AI technology ushered in by the launch of OpenAI’s ChatGPT in late 2022.
One growing trend is the rise of faceless AI channels, run by creators who use these tools to produce videos with AI-generated images and voiceovers. The channels can sometimes earn thousands of dollars a month without the creators ever appearing on camera.
“My goal is to scale up to 50 channels, though it’s getting harder because of how YouTube handles new channels and trust scores,” said GoldenHand, a Spain-based creator who declined to share his real name.
Working with a small team, GoldenHand said he publishes up to 80 videos per day across his network of channels. Some maintain a steady few thousand views per video, while others might suddenly go viral and rack up millions of views, mostly from an audience over the age of 65.
GoldenHand said his content is audio-driven storytelling. He describes his YouTube videos as audiobooks that are paired with AI-generated images and subtitles. Everything after the initial idea is created entirely by AI.
He recently launched a new platform, TubeChef, which gives creators access to his system to automatically generate faceless AI videos starting at $18 a month.
“People think using AI means you’re less creative, but I feel more creative than ever,” he said. “Coming up with 60 to 80 viral video ideas a day is no joke. The ideation is where all the effort goes now.”
AI Slop
As AI-generated content becomes more common online, concerns about its impact are growing. Some users worry about the spread of misinformation, especially as it becomes easier to generate convincing but entirely AI-fabricated videos.
“Even if the content is informative and someone might find it entertaining or useful, I feel we are moving into a time where … you do not have a way to understand what is human made and what is not,” said Henry Ajder, founder of Latent Space Advisory, which helps businesses navigate the AI landscape.
Others are frustrated by the sheer volume of low-effort AI content flooding their feeds. This kind of material is often referred to as “AI slop”: low-quality, randomly generated content made using artificial intelligence.
Google DeepMind Veo 3.
Courtesy: Google DeepMind
“The age of slop is inevitable,” said Ajder, who is also an AI policy advisor at Meta, which owns Facebook and Instagram. “I’m not sure what we do about it.”
While it’s not new, the surge in this type of content has led to growing criticism from users who say it’s harder to find meaningful or original material, particularly on apps like TikTok, YouTube and Instagram.
“I am actually so tired of AI slop,” said one user on X. “AI images are everywhere now. There is no creativity and no effort in anything relating to art, video, or writing when using AI. It’s disappointing.”
However, the creators of this AI content tell CNBC that it comes down to supply and demand. As the AI-generated content continues to get clicks, there’s no reason to stop creating more of it, said Noah Morris, a creator with 18 faceless YouTube channels.
Some argue that AI videos still have inherent artistic value, and though it’s become much easier to create, slop-like content has always existed on the internet, Lingelbach said.
“There’s never been a barrier to people making uninteresting content,” he said. “Now there’s just more opportunity to create different kinds of uninteresting content, but also more kinds of really interesting content too.”
The X logo appears on a phone, and the xAI logo is displayed on a laptop in Krakow, Poland, on April 1, 2025. (Photo by Klaudia Radecka/NurPhoto via Getty Images)
Nurphoto | Nurphoto | Getty Images
Elon Musk‘s social media platform X was hit with an outage on Wednesday, leaving some users unable to load the site.
More than 15,000 users reported issues with the platform at around 9:53 a.m. ET, according to analytics firm Downdetector, which gathers data from users who spot glitches and report them to the service.
The issues appeared to be largely resolved by 10:30 a.m., though some users continued to report disruptions with the platform.
The site has suffered from multiple disruptions in recent months.
Representatives from X didn’t immediately respond to a request for comment on the outage.