People wait in line outside the US Supreme Court in Washington, DC, on February 21, 2023, to hear oral arguments in two cases that test Section 230, the law that provides tech companies a legal shield over what their users post online.
Jim Watson | AFP | Getty Images
Supreme Court Justices voiced hesitation on Tuesday about upending a key legal shield that protects tech companies from liability for their users’ posts, and for how the companies moderate messages on their sites.
Justices across the ideological spectrum expressed concern with breaking the delicate balance set by Section 230 of the Communications Decency Act as they rule on the pivotal case, Gonzalez v. Google, even as some suggested a narrower reading of the liability shield could sometimes make sense.
The current case was brought by the family of an American killed in a 2015 terrorist attack in Paris. The petitioners argue that Google, through its subsidiary YouTube, violated the Anti-Terrorism Act by aiding and abetting ISIS, as it promoted the group’s videos through its recommendation algorithm. Lower courts sided with Google, saying Section 230 protects the company from being held liable for third-party content posted on its service.
The petitioners contend that YouTube’s recommendations actually constitute the company’s own speech, which would fall outside the bounds of the liability shield.
But the justices struggled to understand where the petitioners’ counsel, Eric Schnapper, was drawing the line on what counts as content created by YouTube itself.
Conservative Justice Samuel Alito at one point said he was “completely confused” by the distinction Schnapper tried to draw between YouTube’s own speech and that of a third party.
Schnapper repeatedly pointed to the thumbnail image YouTube shows users to display what video is coming up next, or is suggested based on their views. He said that thumbnail was a joint creation between YouTube and the third party that posted the video, in this case ISIS, because YouTube contributes the URL.
But several justices questioned whether that argument would apply to any attempt to organize information from the internet, including a search engine results page. They expressed concern that such a broad interpretation could have far-reaching effects the high court may not be prepared to predict.
Conservative Justice Brett Kavanaugh noted that courts have applied Section 230 consistently since its inception in the 1990s and pointed to the amici briefs that warned overhauling that interpretation would cause massive economic consequences for many businesses, as well as their workers, consumers and investors. Kavanaugh said those are “serious concerns” Congress could consider if it sought to rework the statute. But the Supreme Court, he said, is “not equipped to account for that.”
“You’re asking us right now to make a very precise predictive judgment that ‘Don’t worry, that it’s really not going to be that bad,'” Kavanaugh told U.S. Deputy Solicitor General Malcolm Stewart, who was arguing the high court should send the case back to the lower court for further consideration. “I don’t know that that’s at all the case. And I don’t know how we can assess that in any meaningful way.”
When Stewart suggested that Congress could amend 230 to account for changes in the reality of the internet today, Chief Justice John Roberts pushed back, noting “the amici suggests that if we wait for Congress to make that choice, the internet will be sunk.”
Even conservative Justice Clarence Thomas, who has openly written that the court should take up a case on Section 230, seemed skeptical of the petitioners’ line in the sand. Thomas noted that YouTube uses the same algorithm to recommend ISIS videos to users interested in that kind of content as it does to promote cooking videos to those interested in that subject. Plus, he said, he sees those as suggestions, not affirmative recommendations.
“I don’t understand how a neutral suggestion about something that you’ve expressed an interest in is aiding and abetting,” Thomas said.
The justices had tough questions for Google too, wondering if the liability protections are quite as broad as the tech industry would like to believe. Liberal Justice Ketanji Brown Jackson, for example, had a long back and forth with Lisa Blatt, counsel arguing on behalf of Google, about whether YouTube would be protected by Section 230 in the hypothetical scenario in which the company promotes an ISIS video on its homepage in a box marked “featured.”
Blatt said publishing a homepage is inherent to operating a website and so should be covered by Section 230. Organization is a core function of platforms, she argued, so if topic headings can’t be covered, the statute basically becomes a “dead letter.”
Liberal Justice Elena Kagan suggested it’s not necessary to agree completely with Google’s assessment of the fallout from altering 230 to fear the potential consequences.
“I don’t have to accept all of Ms. Blatt’s ‘the sky is falling’ stuff to accept something about, ‘Boy, there’s a lot of uncertainty about going the way you would have us go,’ in part just because of the difficulty of drawing lines in this area,” Kagan told Schnapper, adding the job may be better suited for Congress.
“We’re a court, we really don’t know about these things,” Kagan said. “These are not like the nine greatest experts on the internet.”
Section 230 proponents are optimistic
Speaking at a press conference convened by Chamber of Progress, a center-left industry group that Google and other major tech platforms support, several experts rooting for Google’s success in the case said they were more optimistic after the arguments than before.
Cathy Gellis is an independent attorney in the San Francisco Bay Area who filed an amicus brief on behalf of a person running a Mastodon server, as well as a Google-funded startup advocacy group and a digital think tank. She told CNBC that briefs like hers and others seemed to have a big impact on the court.
“It would appear that if nothing else, amicus counsel, not just myself, but my other colleagues, may have saved the day because it was evident that the justices took a lot of those lessons on board,” Gellis said.
“And it appeared overall that there was not a huge appetite to upend the internet, especially on a case that I believe for them looked rather weak from a plaintiff’s point of view.”
Still, Eric Goldman, a professor at Santa Clara University School of Law, said while he felt more optimistic on the outcome of the Gonzalez case, he remains concerned for the future of Section 230.
“I remain petrified that the opinion is going to put all of us in an unexpected circumstance,” Goldman said.
On Wednesday, the justices will hear a similar case with a different legal question.
In Twitter v. Taamneh, the justices will similarly consider whether Twitter can be held liable for aiding and abetting under the Anti-Terrorism Act. But in this case, the focus is on whether Twitter’s decision to regularly remove terrorist posts means it had knowledge of such messages on its platform and should have taken more aggressive action against them.
Conservative Justice Amy Coney Barrett asked Schnapper how the decision in that case could impact the one in the Google matter. Schnapper said if the court ruled against Taamneh, the Gonzalez counsel should be given the chance to amend their arguments in a way that fits the standard set in the other case.
Honor launched the Honor Magic V5 on Wednesday, July 2, as it looks to challenge Samsung in the foldable space.
Honor
Honor on Wednesday touted the slimness and battery capacity of its newly launched thin foldable phone, as it lays down a fresh challenge to market leader Samsung.
The Honor Magic V5 will initially go on sale in China, but the Chinese tech firm will likely bring the device to international markets later this year.
Honor said the Magic V5 is 8.8 mm to 9 mm thick when folded, depending on the color choice. The phone’s predecessor, the Magic V3 (Honor skipped the Magic V4 name), was 9.2 mm when folded. Honor said the Magic V5 weighs 217 grams to 222 grams, again depending on the color model. The previous version was 226 grams.
In China, Honor will launch a special 1 terabyte storage version of the Magic V5, which it says will have a battery capacity of more than 6,000 milliampere-hours, among the highest for foldable phones.
Honor has been keen to tout these features as competition in foldables ramps up, even though such devices still account for a very small share of the overall smartphone market.
Honor vs. Samsung
Foldables represented less than 2% of the overall smartphone market in 2024, according to International Data Corporation. Samsung was the biggest player with 34% market share followed by Huawei with just under 24%, IDC added. Honor took the fourth spot with a nearly 11% share.
Honor is looking to get a head start on Samsung, which has its own foldable launch next week on July 9.
Francisco Jeronimo, a vice president at the International Data Corporation, said the Magic V5 is a strong offering from Honor.
“This is the dream foldable smartphone that any user who is interested in this category will think of,” Jeronimo told CNBC, pointing to features such as the battery.
“This phone continues to push the bar forward, and it will challenge Samsung as they are about to launch their seventh generation of foldable phones,” he added.
At its event next week, Samsung is expected to release a foldable that is thinner than its predecessor and could come close to matching Honor’s offering in terms of size, analysts said. If that happens, Honor will face more competition, especially from Samsung, which has a bigger global footprint.
“The biggest challenge for Honor is the brand equity and distribution reach vs Samsung, where the Korean vendor has the edge,” Neil Shah, co-founder of Counterpoint Research, told CNBC.
Honor’s push into international markets beyond China is still fairly young, with the company looking to build up its brand.
“Further, if Samsung catches up with a thinner form-factor in upcoming iterations, as it has been the real pioneer in foldables with its vertical integration expertise from displays to batteries, the differentiating factor might narrow for Honor,” Shah added.
Vertical integration refers to when a company owns several parts of a product’s supply chain. Samsung has a display and battery business which provides the components for its foldables.
In March, Honor pledged a $10 billion investment in AI over the next five years, with part of that going toward the development of next-generation agents that are seen as more advanced personal assistants.
Honor said its AI assistant Yoyo can interact with other AI models, such as those created by DeepSeek and Alibaba in China, to create presentation decks.
The company also flagged that its AI agent can hail a taxi across multiple apps in China, automatically accepting the quickest ride to arrive and cancelling the rest.
One of the most popular gaming YouTubers is named Bloo, and has bright blue wavy hair and dark blue eyes. But he isn’t a human — he’s a fully virtual personality powered by artificial intelligence.
“I’m here to keep my millions of viewers worldwide entertained and coming back for more,” said Bloo in an interview with CNBC. “I’m all about good vibes and engaging content. I’m built by humans, but boosted by AI.”
Bloo is a virtual YouTuber, or VTuber, who has built a massive following of 2.5 million subscribers and more than 700 million views through videos of him playing popular games like Grand Theft Auto, Roblox and Minecraft. VTubers first gained traction in Japan in the 2010s. Now, advances in AI are making it easier than ever to create VTubers, fueling a new wave of virtual creators on YouTube.
The virtual character, whose bright colors and 3D physique look like something out of a Pixar film or the video game Fortnite, was created by Jordi van den Bussche, a longtime YouTuber also known as Kwebbelkop. Van den Bussche created Bloo after finding himself unable to keep up with the demands of content creation; the work he was putting in no longer matched the output he could sustain.
“Turns out, the flaw in this equation is the human, so we need to somehow remove the human,” said van den Bussche, a 29-year-old from Amsterdam, in an interview. “The only logical way was to replace the human with either a photorealistic person or a cartoon. The VTuber was the only option, and that’s where Bloo came from.”
Jordi Van Den Bussche, YouTuber known as Kwebbelkop.
Courtesy: Jordi Van Den Bussche
Bloo has already generated more than seven figures in revenue, according to van den Bussche. Many VTubers like Bloo are “puppeteered,” meaning a human controls the character’s voice and movements in real time using motion capture or face-tracking technology. Everything else, from video thumbnails to voice dubbing in other languages, is handled by AI technology from ElevenLabs, OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. Van den Bussche’s long-term goal is for Bloo’s entire personality and content creation process to be run by AI.
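To make that division of labor concrete, here is a minimal Python sketch of one slice of such a workflow: translating a script and handing it off for dubbing. The translation step uses the OpenAI chat completions API; the synthesize_dub helper and the voice ID are hypothetical stand-ins for a text-to-speech service such as ElevenLabs, and none of this reflects Bloo’s actual production code.

```python
# Hedged sketch: one localization step of an AI-assisted VTuber pipeline.
# The OpenAI client calls are real API usage; synthesize_dub is a
# hypothetical stub standing in for a TTS provider (e.g. ElevenLabs).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def translate_script(script: str, target_language: str) -> str:
    """Translate a video script while keeping the creator's casual tone."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Translate the following video script into {target_language}. "
                    "Keep the jokes, pacing and casual tone intact."
                ),
            },
            {"role": "user", "content": script},
        ],
    )
    return response.choices[0].message.content


def synthesize_dub(text: str, voice_id: str) -> bytes:
    """Placeholder for a text-to-speech call that returns dubbed audio bytes."""
    raise NotImplementedError("Connect this to a TTS provider of your choice.")


if __name__ == "__main__":
    script = "Hey everyone, welcome back! Today we're playing Minecraft..."
    spanish = translate_script(script, "Spanish")
    # The dubbed track would then be generated with a cloned voice, e.g.:
    # audio = synthesize_dub(spanish, voice_id="bloo-es")  # hypothetical voice ID
    print(spanish)
```

In practice a creator would run a step like this for every upload and every target language, which is exactly the kind of repetitive work van den Bussche wants to hand off to software.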
Van den Bussche has already tested fully AI-generated videos on Bloo’s channel, but says the results have not yet been promising. The content doesn’t perform as well because the AI still lacks the intuition and creative instincts of a human, he said.
“When AI can do it better, faster or cheaper than humans, that’s when we’ll start using it permanently,” van den Bussche said.
The technology might not be far away.
Startup Hedra offers a product that uses AI technology to generate videos that are up to five minutes long. It raised $32 million in a funding round in May led by Andreessen Horowitz’s Infrastructure fund.
Hedra’s product, Character-3, allows users to create AI-generated characters for videos and can add dialogue and other characteristics. CEO Michael Lingelbach told CNBC that Hedra is working on a product that will allow users to create self-sustaining, fully automated characters.
Hedra’s product Character-3 allows users to make figures powered by AI that can be animated in real time.
Hedra
“We’re doing a lot of research accelerating models like Character-3 to real time, and that’s going to be a really good fit for VTubers,” Lingelbach said.
Character-3’s technology is already being used by a growing number of creators who are experimenting with new formats, and many of their projects are going viral. One of those is comedian Jon Lajoie’s Talking Baby Podcast, which features a hyper-realistic animated baby talking into a microphone. Another is Milla Sofia, a virtual singer and artist whose AI-generated music videos attract thousands of views.
Talking Baby Podcast
Source: Instagram | Talking Baby Podcast
These creators are using Character-3 to produce content that stands out on social media, helping them reach wide audiences without the cost and complexity of traditional production.
AI-generated video is a rapidly evolving technology that is reshaping how content is made and shared online, making it easier than ever to produce high-quality video without cameras, actors or editing software. In May, Google announced Veo 3, a tool that creates AI-generated videos with audio.
Google said it uses a subset of YouTube content to train Veo 3, CNBC reported in June. While many creators said they were unaware of the training, experts said it has the potential to create an intellectual property crisis on the platform.
Faceless AI YouTubers
Creators are increasingly finding profitable ways to capitalize on the generative AI technology ushered in by the launch of OpenAI’s ChatGPT in late 2022.
One growing trend is the rise of faceless AI channels, run by creators who use these tools to produce videos built from artificially generated images and voiceovers. Such channels can sometimes earn thousands of dollars a month without the creator ever appearing on camera.
“My goal is to scale up to 50 channels, though it’s getting harder because of how YouTube handles new channels and trust scores,” said GoldenHand, a Spain-based creator who declined to share his real name.
Working with a small team, GoldenHand said he publishes up to 80 videos per day across his network of channels. Some maintain a steady few thousand views per video, while others might suddenly go viral and rack up millions of views, mostly among an audience over the age of 65.
GoldenHand said his content is audio-driven storytelling. He describes his YouTube videos as audiobooks that are paired with AI-generated images and subtitles. Everything after the initial idea is created entirely by AI.
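As a rough illustration of how such an idea-to-upload flow can be orchestrated, the sketch below chains the stages he describes (script, narration, images, assembly) into a single batch loop. Every helper is a placeholder that returns dummy values rather than calling a real service, and none of it reflects TubeChef’s or GoldenHand’s actual system; the point is the shape of the pipeline, not the specific tools.

```python
# Hypothetical skeleton of a faceless-video pipeline: one idea in, one
# upload-ready video out. Each helper is a placeholder for a real service
# (an LLM, a TTS voice, an image generator, a video assembler).
from dataclasses import dataclass, field


@dataclass
class VideoJob:
    idea: str
    script: str = ""
    narration_path: str = ""
    image_paths: list = field(default_factory=list)
    output_path: str = ""


def write_script(idea: str) -> str:
    # Placeholder: a real pipeline would prompt an LLM to expand the idea
    # into a long-form narrated story.
    return f"[story script expanded from idea: {idea}]"


def narrate(script: str) -> str:
    # Placeholder: send the script to a TTS service, return the audio path.
    return "narration.mp3"


def illustrate(script: str, scenes: int = 8) -> list:
    # Placeholder: generate one AI image per scene of the story.
    return [f"scene_{i}.png" for i in range(scenes)]


def assemble(job: VideoJob) -> str:
    # Placeholder: stitch narration, images and subtitles into a video file.
    return f"{job.idea[:20].replace(' ', '_')}.mp4"


def run_batch(ideas: list) -> list:
    """Turn a day's worth of ideas into finished jobs, one after another."""
    jobs = []
    for idea in ideas:
        job = VideoJob(idea=idea)
        job.script = write_script(idea)
        job.narration_path = narrate(job.script)
        job.image_paths = illustrate(job.script)
        job.output_path = assemble(job)
        jobs.append(job)
    return jobs


if __name__ == "__main__":
    finished = run_batch(["A lighthouse keeper's last winter", "The lost letters of 1943"])
    print([job.output_path for job in finished])
```

The scale he describes comes from running a loop like this across many channels, with humans mainly supplying the ideas up front.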
He recently launched a new platform, TubeChef, which gives creators access to his system to automatically generate faceless AI videos starting at $18 a month.
“People think using AI means you’re less creative, but I feel more creative than ever,” he said. “Coming up with 60 to 80 viral video ideas a day is no joke. The ideation is where all the effort goes now.”
AI Slop
As AI-generated content becomes more common online, concerns about its impact are growing. Some users worry about the spread of misinformation, especially as it becomes easier to generate convincing but entirely AI-fabricated videos.
“Even if the content is informative and someone might find it entertaining or useful, I feel we are moving into a time where … you do not have a way to understand what is human made and what is not,” said Henry Ajder, founder of Latent Space Advisory, which helps businesses navigate the AI landscape.
Others are frustrated by the sheer volume of low-effort AI content flooding their feeds. This kind of material is often referred to as “AI slop”: low-quality, randomly generated content made using artificial intelligence.
Google DeepMind Veo 3.
Courtesy: Google DeepMind
“The age of slop is inevitable,” said Ajder, who is also an AI policy advisor at Meta, which owns Facebook and Instagram. “I’m not sure what we do about it.”
While it’s not new, the surge in this type of content has led to growing criticism from users who say it’s harder to find meaningful or original material, particularly on apps like TikTok, YouTube and Instagram.
“I am actually so tired of AI slop,” said one user on X. “AI images are everywhere now. There is no creativity and no effort in anything relating to art, video, or writing when using AI. It’s disappointing.”
However, the creators of this AI content tell CNBC that it comes down to supply and demand. As the AI-generated content continues to get clicks, there’s no reason to stop creating more of it, said Noah Morris, a creator with 18 faceless YouTube channels.
Some argue that AI videos still have inherent artistic value, and that while such content has become much easier to create, slop-like material has always existed on the internet, Lingelbach said.
“There’s never been a barrier to people making uninteresting content,” he said. “Now there’s just more opportunity to create different kinds of uninteresting content, but also more kinds of really interesting content too.”
The X logo appears on a phone, and the xAI logo is displayed on a laptop in Krakow, Poland, on April 1, 2025.
NurPhoto | Getty Images
Elon Musk’s social media platform X was hit with an outage on Wednesday, leaving some users unable to load the site.
More than 15,000 users reported issues with the platform at around 9:53 a.m. ET, according to analytics firm Downdetector, which gathers data from users who spot glitches and report them to the service.
The issues appeared to be largely resolved by 10:30 a.m., though some users continued to report disruptions with the platform.
The site has suffered from multiple disruptions in recent months.
Representatives from X didn’t immediately respond to a request for comment on the outage.