A legal test that Google’s lawyer told the Supreme Court was roughly “96% correct” could drastically undermine the liability shield that the company and other tech platforms have relied on for decades, according to several experts who advocate for upholding the law to the fullest extent.
The so-called Henderson test would significantly weaken the power of Section 230 of the Communications Decency Act, several experts said in conversations and briefings following oral arguments in the case Gonzalez v. Google. Some of those who criticized Google’s concession even work for groups backed by the company.
Section 230 is the statute that protects tech platforms’ ability to host material from users — like social media posts, uploaded video and audio files, and comments — without being held legally liable for their content. It also allows platforms to moderate their services and remove posts they consider objectionable.
The law is central to the question that will be decided by the Supreme Court in the Gonzalez case, which asks whether platforms like Google’s YouTube can be held responsible for algorithmically recommending user posts that seem to endorse or promote terrorism.
In arguments on Tuesday, the justices seemed hesitant to issue a ruling that would overhaul Section 230.
But even if they avoid commenting on that law, they could still issue caveats that change the way it’s enforced, or clear a path for changing the law in the future.
What is the Henderson test?
One way the Supreme Court could undercut Section 230 is by endorsing the Henderson test, some advocates believe. Ironically, Google’s own lawyers may have given the court more confidence to endorse this test, if it chooses to do so.
The Henderson test stems from a November ruling by the Fourth Circuit appeals court in Henderson v. The Source for Public Data. The plaintiffs in that case sued a group of companies that collect public information about individuals, like criminal records, voting records and driving information, then put it in a database that they sell to third parties. The plaintiffs alleged that the companies violated the Fair Credit Reporting Act by failing to maintain accurate information, and by providing inaccurate information to a potential employer.
A lower court ruled that Section 230 barred the claims, but the appeals court overturned that decision.
The appeals court wrote that for Section 230 protection to apply, “we require that liability attach to the defendant on account of some improper content within their publication.”
In this case, it wasn’t the content itself that was at fault, but how the company chose to present it.
The court also ruled Public Data was responsible for the content because it decided how to present it, even though the information was pulled from other sources. The court said it’s plausible that some of the information Public Data sent to one of the plaintiff’s potential employers was “inaccurate because it omitted or summarized information in a way that made it misleading.” In other words, once Public Data made changes to the information it pulled, it became an information content provider.
Should the Supreme Court endorse the Henderson ruling, it would effectively “moot Section 230,” said Jess Miers, legal advocacy counsel for the Chamber of Progress, a center-left industry group that counts Google among its backers. Miers said this is because Section 230’s primary advantage is to help quickly dismiss cases against platforms that center on user posts.
“It’s a really dangerous test because, again, it encourages plaintiffs to then just plead their claims in ways that say, well, we’re not talking about how improper the content is at issue,” Miers said. “We’re talking about the way in which the service put that content together or compiled that content.”
Eric Goldman, a professor at Santa Clara University School of Law, wrote on his blog that Henderson would be a “disastrous ruling if adopted by SCOTUS.”
“It was shocking to me to see Google endorse a Henderson opinion because it’s a dramatic narrowing of Section 230,” Goldman said at a virtual press conference hosted by the Chamber of Progress after the arguments. “And to the extent that the Supreme Court takes that bait and says, ‘Henderson’s good to Google, it’s good to us,’ we will actually see a dramatic narrowing of Section 230 where plaintiffs will find lots of other opportunities to bring cases that are based on third-party content. They’ll just say that they’re based on something other than the harm that was in the third-party content itself.”
Google pointed to the parts of its brief in the Gonzalez case that discuss the Henderson test. In the brief, Google attempts to distinguish the actions of a search engine, social media site, or chat room that displays snippets of third-party information from those of a credit-reporting website, like those at issue in Henderson.
In the case of a chatroom, Google says, although the “operator supplies the organization and layout, the underlying posts are still third-party content,” meaning it would be covered by Section 230.
“By contrast, where a credit-reporting website fails to provide users with its own required statement of consumer rights, Section 230(c)(1) does not bar liability,” Google wrote. “Even if the website also publishes third-party content, the failure to summarize consumer rights and provide that information to customers is the website’s act alone.”
Google also said 230 would not apply to a website that “requires users to convey allegedly illegal preferences,” like those that would violate housing law. That’s because by “‘materially contributing to [the content’s] unlawfulness,’ the website makes that content its own and bears responsibility for it,” Google said, citing the 2008 Fair Housing Council of San Fernando Valley v. Roommates.com case.
Concerns over Google’s concession
Section 230 experts digesting the Supreme Court arguments were perplexed by Google’s lawyer’s decision to give such a full-throated endorsement of Henderson. In trying to make sense of it, several suggested it might have been a strategic decision to try to show the justices that Section 230 is not a boundless free pass for tech platforms.
But in doing so, many also felt Google went too far.
Cathy Gellis, who represented amici in a brief submitted in the case, said at the Chamber of Progress briefing that Google’s lawyer was likely looking to illustrate the line of where Section 230 does and does not apply, but “by endorsing it as broadly, it endorsed probably more than we bargained for, and certainly more than necessarily amici would have signed on for.”
Corbin Barthold, internet policy counsel at Google-backed TechFreedom, said in a separate press conference that the idea Google may have been trying to convey in supporting Henderson wasn’t necessarily bad on its own. He said Google seemed to be arguing that even if you use a definition of publication like the one Henderson lays out, organizing information is inherent to what platforms do because “there’s no such thing as just like brute conveyance of information.”
But in making that argument, Barthold said, Google’s lawyer “kind of threw a hostage to fortune.”
“Because if the court then doesn’t buy the argument that Google made that there’s actually no distinction to be had here, it could go off in kind of a bad direction,” he added.
Miers speculated that Google might have seen the Henderson case as a relatively safe one to cite, given that it involves an alleged violation of the Fair Credit Reporting Act, rather than a question of a user’s social media post.
“Perhaps Google’s lawyers were looking for a way to show the court that there are limits to Section 230 immunity,” Miers said. “But I think in doing so, that invites some pretty problematic readings into the Section 230 immunity test, which can have pretty irreparable results for future internet law litigation.”
iPhone Air is the big newcomer in Apple’s latest lineup, which went on sale Friday, but inside the slim phone’s raised plateau is another new piece of hardware that signals a renewed focus on artificial intelligence.
Apple’s custom A19 Pro chip introduces a major architecture change, with neural accelerators added to each GPU core to increase compute power. Apple also debuted its first ever wireless chip for iPhone, the N1, and a second generation of its iPhone modem, the C1X. It’s a move analysts say gives Apple control of all the core chips in its phones.
“That’s where the magic is. When we have control, we are able to do things beyond what we can do by buying a merchant silicon part,” said Tim Millet, Apple vice president of platform architecture. He sat down with CNBC at Apple Park in September for the first U.S. interview about the new chips.
Until now, Broadcom was the main provider of wireless and Bluetooth chips for iPhones, although Apple has made networking chips for the AirPods and Apple Watch for nearly a decade. Apple’s N1 is in the entire iPhone 17 lineup and the iPhone Air.
Arun Mathias, Apple vice president of wireless software technologies and ecosystems, gave CNBC an example of the N1’s improved Wi-Fi functionality.
“One of the things people may not realize is that your Wi-Fi access points actually contribute to your device’s awareness of location, so you don’t need to use GPS, which actually costs more from a power perspective,” Mathias said. “By being able to do this more seamlessly in the background, not needing to wake up the application processor as much, we can do that significantly more efficiently.”
Apple’s new custom SoC for iPhone, the A19 Pro, has neural accelerators added to the GPU cores to prioritize AI workloads.
Qualcomm modems remain in the iPhone 17, 17 Pro and 17 Pro Max, but Apple’s C1X is in the iPhone Air.
“It may not be as good as Qualcomm’s yet, in terms of just overall throughput and performance, but they can control it and they can make it run at lower power. So you’re going to get better battery life,” said Ben Bajarin, CEO of Creative Strategies, a technology research and consulting firm. He expects Apple to “completely phase out” Qualcomm in the “next couple of years.”
Apple’s Mathias said the C1X is “up to twice as fast” as the C1 and “uses 30% less energy” than the Qualcomm modem in the iPhone 16 Pro.
Neither Qualcomm nor Broadcom saw much market impact following Apple’s announcement, and both companies will maintain licensing deals with Apple for certain core technologies.
“They probably won’t ever have their own Apple model like Google or OpenAI,” Bajarin said. “They’re still going to run those services on iPhone, right? They want the iPhone to be the best place for developers to run their AI.”
Apple has been making its own system on a chip, or SoC, since the A series launched with the iPhone 4 in 2010. The latest generation A19 Pro has a new chip architecture that prioritizes AI workloads, adding neural accelerators to the GPU cores.
“We are building the best on-device AI capability that anyone else has,” Millet told CNBC. “Right now we are focused on making sure that these phones that we’re shipping today, or shipping soon, will be capable of all the important on-device AI workloads that are coming.”
Privacy is a major reason Apple is prioritizing on-device AI, but Millet said there’s another reason, too.
“It is efficient for us. It is responsive. We know that we are much more in control over the experience,” he said.
One “built-in AI” feature Millet highlighted is the new front camera, which uses AI to detect when another face enters the frame and automatically switches to taking a horizontal photo. “It’s leveraging a full complement of almost all the capabilities in the A19 Pro,” Millet said.
Apple’s original AI hardware, the Neural Engine, was first unveiled back in 2017. This time, it was barely mentioned at launch. Instead, the focus is on adding compute power to the GPUs.
“The integration of the neural processing is reaching MacBook Pro class performance inside an iPhone,” Millet said. “It’s a big, big step forward in ML compute. And so when you look inside the Neural Engine, for example, you have a lot of dense matrix math. We didn’t have that capability in our GPU. But now we do with A19 Pro.”
Bajarin told CNBC that Apple’s neural accelerators may work similarly to the tensor cores on Nvidia’s AI chips, such as the H100.
“We’re integrating neural processing in a way that allows someone who’s writing a program to one of those small processors, extending the instruction set so they have a new class of computer that they have access to right there, and they can switch back and forth between 3D-rendering instructions and neural-processing instructions, all seamlessly inside the same microprogram,” Millet said.
Apple’s A19 SoC is in the base model iPhone 17, while the A19 Pro is in the iPhone Air, iPhone 17 Pro and 17 Pro Max.
Apple’s iPhone 17 Pro, shown on September 9, 2025 at Apple Park in California, has enhanced 3D-rendering capabilities powered by Apple’s custom A19 Pro chip, with neural accelerators added to its six GPU cores.
Following overheating issues in the iPhone 15, a new “vapor chamber” in the Pro models keeps the custom chips cool.
“It’s actually positioned in concert with where the system on a chip, the A19 Pro is positioned,” said Kaiann Drance, Apple’s vice president of worldwide iPhone product marketing. “We think about how that all goes together, including with that forged unibody aluminum design, which is incredibly thermally conductive so that we can effectively dissipate heat with the vapor chamber, with where it’s positioned with our chip. And it’s even laser welded into it, which creates a metallic bond which also helps dissipate heat.”
More chips, more U.S. manufacturing
Apple still relies on others for smaller components, like Samsung for memory and Texas Instruments for analog chips. The bigger core chips, however, may all be Apple-designed in every iPhone as soon as next year, according to Bajarin.
“We expect that there would be modems coming to Mac. We would expect there’s modems coming to iPad. There’s probably N variants of the networking chip coming to Mac,” Bajarin said. “I think over the course of the next few years, it will be on all of the portfolio.”
When CNBC asked Apple’s Millet if neural accelerators will be in the GPU cores of M5, the next anticipated SoC for Mac, he said, “We have a unified approach to architecture.”
The iPhone maker plans to manufacture at least some of its custom chips in the U.S., at facilities like Taiwan Semiconductor Manufacturing Company’s new campus in Arizona, where CNBC got a tour of the first completed fab.
Apple’s A19 Pro is made at the leading edge of TSMC’s 3-nanometer node. While TSMC is working toward 3nm production in Arizona by 2028, it’s not there yet.
“If you need to be on the leading edge, it’s going to be Taiwan for the time being,” Bajarin said.
In August, President Donald Trump announced a 100% tariff on chips from companies not manufacturing domestically. That same day, Apple increased its U.S. spending commitment to $600 billion over the next four years. CEO Tim Cook said part of that will go toward creating an “end-to-end silicon supply chain right here in America.”
“There’s really a question of what part of tariffs impact the silicon supply chain,” Bajarin said. “This is obviously why Apple and Tim Cook are on their mission and out there talking about investing in America.”
As part of that plan, Bajarin said Apple could give struggling U.S. chipmaker Intel “serious consideration if 14A really does deliver on all of its promises.” Although, he added, it’s “going to be awhile” before Intel “becomes a viable option.”
For now, Apple is committed to making chips at TSMC Arizona.
“We are super excited about TSMC’s push into U.S. manufacturing. Obviously it will help us from a time zone perspective, and we also appreciate that the diversity of the supply is also really important,” Millet said.
When asked if he knows how much of Apple’s $600 billion U.S. spend will go toward custom silicon, Millet said, “I hope it’s a lot.”
Mark Zuckerberg, chief executive officer of Meta Platforms Inc., wears a pair of Meta Ray-Ban Display AI glasses during the Meta Connect event in Menlo Park, California, on Wednesday, Sept. 17, 2025.
When it comes to the new $799 Meta Ray-Ban Display glasses, it’s the device’s accompanying fuzzy, gray wristband that truly dazzles.
I was able to try out Meta’s next-generation smart glasses that the social media company announced Wednesday at its annual Connect event. These are the first glasses that Meta sells to consumers with a built-in display, marking an important step for the company as it works toward CEO Mark Zuckerberg’s vision of having headsets and glasses overtake smartphones as people’s preferred form of computing.
The display on the new glasses, though, is still quite simplistic. Last year at Connect, Meta unveiled its Orion glasses, which are a prototype capable of overlaying complex 3D visuals onto the physical world. Those glasses were thick, required a computing puck and were built for demo purposes only.
The Meta Ray-Ban Display, however, is going on sale to the public, starting in the U.S. on Sept. 30.
Though the new glasses include just a small digital display in their right lens, that screen enables unique visual functions, like reading messages, seeing photo previews and reading live captions while having a conversation with someone.
Controlling the device requires putting on its EMG sensor wristband that detects the electrical signals generated by a person’s body so they can control the glasses via hand gestures. Putting it on was just like strapping on a watch, except for the small, electric jolt I felt when it activated. It wasn’t as much of a shock as you feel taking clothes out of the dryer, but it was noticeable.
Donning the new glasses was less shocking, until I had them on and saw the little display emerge, just below my right cheek. The display is like a miniaturized smartphone screen but translucent so as to not obscure real-world objects.
Despite the display’s high resolution, the icons weren’t always clear when contrasted with my real-world field of view, causing the letters to appear a bit murky. These visuals aren’t meant to wrap around your head in crystal-clear fidelity; they’re there for you to perform simple actions, like activating the glasses’ camera and glancing at the songs on Spotify. It’s more utility than entertainment.
The Meta Ray-Ban Display AI glasses with the Meta Neural Band wristband at Meta headquarters in Menlo Park, California, on Tuesday, Sept. 16, 2025.
I had the most fun trying to perform hand gestures to navigate the display and open apps. By clenching my fist and swiping my thumb on the surface of my pointer finger, I was able to scroll through the apps like I was using a touchpad.
It took me several attempts at first to open the camera app through pinching my index finger and thumb together, and when the app wouldn’t activate I would find myself pinching twice, mimicking the double clicking of a mouse on a computer. But whereas using a mouse is second nature to me, I learned I have subpar pinching skills that lack the correct cadence and timing required to consistently open the app.
It was a bit strange and amusing to see people in front of me while I continuously pinched my fingers to interact with the screen. I felt like I was reenacting a famous comedy sketch from the TV show “The Kids in the Hall” in which a misanthrope watches people from afar while pinching his fingers and saying, “I’m crushing your head, I’m crushing your head!”
With the camera app finally opened, the display showed what I was looking at in front of me, giving me a preview of how my photos and videos would turn out. It was like having my own personal picture-in-picture feature, like you’d get on a TV.
I found myself experiencing some cognitive dissonance at times as my eyes were constantly figuring out what to focus on due to the display always sitting just outside the center of my field of view. If you’ve ever taken a vision test that involves identifying when you see squiggly lines appearing in your periphery, you have a sense of what I was feeling.
Besides pinching, the Meta Ray-Ban Display glasses can also be controlled using the Meta AI voice assistant, just as users can with the device’s predecessors.
When I took a photo of some of the paintings decorating the demo room’s halls, I was told by support staff to ask Meta AI to explain to me what I was looking at. Presumably, Meta AI would have told me I was looking at various paintings from the Bauhaus art movement, but the digital assistant never activated correctly before I was escorted to another part of the demo.
I could see the Meta Ray-Ban Display’s live captions feature being helpful in noisy situations: it successfully picked up the voice of the demo’s tour guide while dance music from the Connect event blared in the background. When he said, “Let’s all head to the next room,” I saw his words appear in the display like closed captions on a TV show.
But ultimately, I was most drawn to the wristband, particularly when I listened to some music with the glasses via Spotify. By rotating my thumb and index finger as if I was turning an invisible stereo knob, I was able to adjust the volume, an unexpectedly delightful experience.
It was this neural wristband that really drilled into my brain how much cutting-edge technology has been crammed into the new Meta Ray-Ban Display glasses. And while the device’s high price may turn off consumers, the glasses are novel enough to potentially attract developers seeking more computing platforms to build apps for.
Navan, the business travel, payments, and expense management startup, filed on Friday afternoon to go public.
Its S-1 filing with the Securities and Exchange Commission indicates that the company plans to list on the Nasdaq Global Select Market under the symbol “NAVN.”
Navan reported trailing 12-month revenue of $613 million (up 32%) across over 10,000 customers, and gross bookings of $7.6 billion (up 34%), according to the S-1 filing.
Goldman Sachs and Citigroup will act as lead book-running managers for the proposed offering.
Navan ranked No. 39 on this year’s CNBC Disruptor 50 list, and also made the 2024 list.
The IPO market has bounced back this year, with deal activity up 56%: 156 deals (and roughly 200 IPO filings in all) have raised $30 billion in proceeds, up more than 23% year over year, according to IPO tracker Renaissance Capital. It has been the best year for IPOs since 2021, though still far below the Covid-era boom, when IPOs raised over $142 billion in 2021 and $78 billion in 2020.
This year’s deal flow has been highlighted by hot AI names like CoreWeave, as well as some of the startup world’s most highly valued firms from the past decade, such as fintech Klarna and design firm Figma; crypto companies Circle, Bullish and Gemini; and long-awaited IPO candidates finally hitting the market, such as StubHub this week, though its shares have slumped since their first day of trading. Top Amazon reseller Pattern went public on Friday.
Launched by CEO Ariel Cohen and co-founder Ilan Twig in 2015, Navan set out to disrupt a business travel sector where incumbents relied on clunky legacy tools and fragmented workflows.
The Palo Alto-based company, formerly called TripActions, refers to itself as an “all-in-one super app” for corporate travel and expenses.
Customers include Unilever, Adobe, Christie’s, Blue Origin and Geico.
It has also been pushing further into AI, with a virtual assistant named Ava handling approximately 50% of user interactions during the six months ended July 31, according to the filing, and a proprietary AI framework called Navan Cognition, along with proprietary cloud infrastructure, supporting its platform.
“We built Navan for the road warriors, for CEOs and CFOs who understand travel’s critical importance to their strategy, the finance teams who demand precision and control, the executive assistants juggling itineraries, and the program admins ensuring seamless events,” the co-founders wrote in an IPO filing letter.
“We saw firsthand the frustration of clunky, outdated systems. Travelers were forced to cobble together solutions, wait for hours on hold to book or change travel, and negotiate with travel agents. They struggled to adhere to company policies, with little visibility into those policies, and after all that, they spent even more time on tedious expense reports after a trip. We felt the pain of finance teams struggling to gain visibility into fragmented travel spending and to enforce policies, and the frustration of suppliers unable to connect directly with the high-value business travelers they sought to serve,” they wrote in the filing.
Revenue grew 33% year-over-year from $402 million in fiscal 2024 to $537 million in fiscal 2025, according to the S-1 filing. The company reported a net loss that decreased 45% year-over-year from $332 million in fiscal 2024 to $181 million in fiscal 2025. Gross margin improved from 60% in fiscal 2024 to 68% in fiscal 2025.
The business travel and expense space is crowded, with fellow Disruptors Ramp and Brex, as well as TravelPerk and incumbents like SAP Concur and American Express Global Business Travel.