
A legal test that Google’s lawyer told the Supreme Court was roughly “96% correct” could drastically undermine the liability shield that the company and other tech platforms have relied on for decades, according to several experts who advocate for upholding the law to the fullest extent.

The so-called “Henderson test” would significantly weaken the power of Section 230 of the Communications Decency Act, several experts said in conversations and briefings following oral arguments in the case Gonzalez v. Google. Some of those who criticized Google’s concession even work for groups backed by the company.

Section 230 is the statute that protects tech platforms’ ability to host material from users — like social media posts, uploaded video and audio files, and comments — without being held legally liable for their content. It also allows platforms to moderate their services and remove posts they consider objectionable.

The law is central to the question that will be decided by the Supreme Court in the Gonzalez case, which asks whether platforms like Google’s YouTube can be held responsible for algorithmically recommending user posts that seem to endorse or promote terrorism.

In arguments on Tuesday, the justices seemed hesitant to issue a ruling that would overhaul Section 230.

But even if they decline to overhaul that law, they could still issue caveats that change the way it’s enforced, or clear a path for changing it in the future.

What is the Henderson test?

One way the Supreme Court could undercut Section 230 is by endorsing the Henderson test, some advocates believe. Ironically, Google’s own lawyers may have given the court more confidence to endorse this test, if it chooses to do so.

The Henderson test came about from a November ruling by the Fourth Circuit appeals court in Henderson v. The Source for Public Data. The plaintiffs in that case sued a group of companies that collect public information about individuals, like criminal records, voting records and driving information, then put it in a database that they sell to third parties. The plaintiffs alleged that the companies violated the Fair Credit Reporting Act by failing to maintain accurate information, and by providing inaccurate information to a potential employer.

A lower court ruled that Section 230 barred the claims, but the appeals court overturned that decision.

The appeals court wrote that for Section 230 protection to apply, “we require that liability attach to the defendant on account of some improper content within their publication.”

In this case, it wasn’t the content itself that was at fault, but how the company chose to present it.

The court also ruled Public Data was responsible for the content because it decided how to present it, even though the information was pulled from other sources. The court said it’s plausible that some of the information Public Data sent to one of the plaintiff’s potential employers was “inaccurate because it omitted or summarized information in a way that made it misleading.” In other words, once Public Data made changes to the information it pulled, it became an information content provider.

Should the Supreme Court endorse the Henderson ruling, it would effectively “moot Section 230,” said Jess Miers, legal advocacy counsel for Chamber of Progress, a center-left industry group that counts Google among its backers. Miers said this is because Section 230’s primary advantage is to help quickly dismiss cases against platforms that center on user posts.

“It’s a really dangerous test because, again, it encourages plaintiffs to then just plead their claims in ways that say, well, we’re not talking about how improper the content is at issue,” Miers said. “We’re talking about the way in which the service put that content together or compiled that content.”

Eric Goldman, a professor at Santa Clara University School of Law, wrote on his blog that Henderson would be a “disastrous ruling if adopted by SCOTUS.”

“It was shocking to me to see Google endorse a Henderson opinion, because it’s a dramatic narrowing of Section 230,” Goldman said at a virtual press conference hosted by Chamber of Progress after the arguments. “And to the extent that the Supreme Court takes that bait and says, ‘Henderson’s good to Google, it’s good to us,’ we will actually see a dramatic narrowing of Section 230 where plaintiffs will find lots of other opportunities to bring cases that are based on third-party content. They’ll just say that they’re based on something other than the harm that was in the third-party content itself.”

Google pointed to the parts of its brief in the Gonzalez case that discuss the Henderson test. In the brief, Google attempts to distinguish the actions of a search engine, social media site, or chat room that displays snippets of third-party information from those of a credit-reporting website, like those at issue in Henderson.

In the case of a chatroom, Google says, although the “operator supplies the organization and layout, the underlying posts are still third-party content,” meaning it would be covered by Section 230.

“By contrast, where a credit-reporting website fails to provide users with its own required statement of consumer rights, Section 230(c)(1) does not bar liability,” Google wrote. “Even if the website also publishes third-party content, the failure to summarize consumer rights and provide that information to customers is the website’s act alone.”

Google also said 230 would not apply to a website that “requires users to convey allegedly illegal preferences,” like those that would violate housing law. That’s because by “‘materially contributing to [the content’s] unlawfulness,’ the website makes that content its own and bears responsibility for it,” Google said, citing the 2008 Fair Housing Council of San Fernando Valley v. Roommates.com case.

Concerns over Google’s concession

Section 230 experts digesting the Supreme Court arguments were perplexed by Google’s lawyer’s decision to give such a full-throated endorsement of Henderson. In trying to make sense of it, several suggested it might have been a strategic decision to try to show the justices that Section 230 is not a boundless free pass for tech platforms.

But in doing so, many also felt Google went too far.

Cathy Gellis, who represented amici in a brief submitted in the case, said at the Chamber of Progress briefing that Google’s lawyer was likely looking to illustrate the line of where Section 230 does and does not apply, but “by endorsing it as broadly, it endorsed probably more than we bargained for, and certainly more than necessarily amici would have signed on for.”

Corbin Barthold, internet policy counsel at Google-backed TechFreedom, said in a separate press conference that the idea Google may have been trying to convey in supporting Henderson wasn’t necessarily bad on its own. He said Google seemed to be arguing that even if you use a definition of publication like the one Henderson lays out, organizing information is inherent to what platforms do because “there’s no such thing as just like brute conveyance of information.”

But in making that argument, Barthold said, Google’s lawyer “kind of threw a hostage to fortune.”

“Because if the court then doesn’t buy the argument that Google made that there’s actually no distinction to be had here, it could go off in kind of a bad direction,” he added.

Miers speculated that Google might have seen the Henderson case as a relatively safe one to cite, given that it involves an alleged violation of the Fair Credit Reporting Act, rather than a question of a user’s social media post.

“Perhaps Google’s lawyers were looking for a way to show the court that there are limits to Section 230 immunity,” Miers said. “But I think in doing so, that invites some pretty problematic readings into the Section 230 immunity test, which can have pretty irreparable results for future internet law litigation.”

WATCH: Why the Supreme Court’s Section 230 case could reshape the internet


Palo Alto tops earnings expectations, announces Chronosphere acquisition

Palo Alto Networks beat Wall Street’s fiscal first-quarter estimates after the bell on Wednesday and announced plans to buy cloud observability platform Chronosphere for $3.35 billion.

The stock fell about 3%.

Here’s how the company did versus LSEG estimates:

  • Earnings per share: 93 cents adjusted vs. 89 cents expected
  • Revenue: $2.47 billion vs. $2.46 billion expected

Revenues grew 16% from $2.1 billion a year ago. Net income fell to $334 million, or 47 cents per share, from $351 million, or 49 cents per share in the year-ago period.

Palo Alto’s Chronosphere deal is slated to close in the second half of its fiscal 2026. The cybersecurity provider is also in the process of buying Israeli identity security firm CyberArk for $25 billion, part of CEO Nikesh Arora’s acquisition spree.

He told investors on an earnings call that Palo Alto is making the acquisitions simultaneously to address the fast-moving AI cycle.

“This large surge towards building AI compute is causing a lot of the AI players to think about newer models for software stacks and infrastructure stacks in the future,” he said.

Palo Alto guided for revenues between $2.57 billion and $2.59 billion in the second quarter, the midpoint of which was in line with a $2.58 billion estimate. For the full year, the company expects $10.50 billion to $10.54 billion, versus a $10.51 billion estimate.

Capital expenditures during the period came in at $84 million, well above the $58.1 million that StreetAccount expected. Remaining performance obligations, a metric that tracks backlog, grew to $15.5 billion and topped a $15.43 billion estimate.

The rise of artificial intelligence has also stirred up increasingly sophisticated cyberattacks and shaped the tools the company offers customers. The Santa Clara, California-based company has infused AI into its products and in October launched automated AI agents to help fend off attacks.


Elon Musk’s xAI will be first customer for Nvidia-backed data center in Saudi Arabia

Nvidia and xAI said on Wednesday that a large data center facility being built in Saudi Arabia and equipped with hundreds of thousands of Nvidia chips will count Elon Musk’s artificial intelligence startup as its first customer.

Musk and Nvidia CEO Jensen Huang were both in attendance at the U.S.-Saudi Investment Forum in Washington, D.C.

The announcement builds on a partnership from May, when Nvidia said it would provide Saudi Arabia’s Humain with chips that use 500 megawatts of power. On Wednesday, Humain said the project would include about 600,000 Nvidia graphics processing units.

Humain was launched earlier this year and is owned by the Saudi Public Investment Fund. The plan to build the data center was initially announced when Huang visited Saudi Arabia alongside President Donald Trump.

“Could you imagine, a startup company, approximately zero billion dollars in revenues, now going to build a data center for Elon,” Huang said.

The facility is one of the most prominent examples of what Nvidia calls “sovereign AI.” The chipmaker has said that nations will increasingly need to build data centers for AI in order to protect national security and their culture. It’s also a potentially massive market for Nvidia’s pricey AI chips beyond a handful of hyperscalers.

Huang’s appearance at an event supported by President Trump is another sign of the administration’s focus on AI. Huang has become friendly with the president as Nvidia lobbies to gain licenses to ship future AI chips to China.

When announcing the agreement, Musk, who was a major figure in the early days of the second Trump administration, briefly mixed up the size of the data center, which is measured in megawatts, a unit of power. He joked that plans for a data center that would be 1,000 times larger would have to wait.

“That will be eight bazillion, trillion dollars,” Musk joked.

Humain won’t just use Nvidia chips. Advanced Micro Devices and Qualcomm will also sell chips and AI systems to Humain. AMD CEO Lisa Su and Qualcomm CEO Cristiano Amon both attended a state dinner on Tuesday to honor Saudi Crown Prince Mohammed bin Salman.

AMD will provide chips that may require as much as 1 gigawatt of power by 2030. The company said the chips that it would provide are its Instinct MI450 GPUs for AI. Cisco will provide additional infrastructure for the data center, AMD said.

Qualcomm will sell Humain its new data center chips that were first revealed in October, called the AI200 and AI250. Humain will deploy 200 megawatts of Qualcomm chips, the company said.

WATCH: Qualcomm CEO on new AI chips: Trying to prepare for the next phase of AI data center growth

Meta chief AI scientist Yann LeCun is leaving to create his own startup

Yann LeCun, known as one of the godfathers of modern artificial intelligence and one of the first AI visionaries to join the company then known as Facebook, is leaving Meta.

LeCun said in a LinkedIn post on Wednesday that he plans to create a startup that specializes in a kind of AI technology that researchers have described as world models, which analyze information beyond web data in order to better represent the physical world and its properties.

“I am creating a startup company to continue the Advanced Machine Intelligence research program (AMI) I have been pursuing over the last several years with colleagues at FAIR, at NYU, and beyond,” LeCun wrote. “The goal of the startup is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.”

Meta will partner with LeCun’s startup.

The departure comes at a time of disarray within Meta’s AI unit, which was dramatically overhauled this year after the company released the fourth version of its Llama open-source large language model to a disappointing response from developers. That spurred CEO Mark Zuckerberg to spend billions of dollars recruiting top AI talent, including a June $14.5 billion investment in Scale AI to lure the startup’s 28-year-old CEO Alexandr Wang, now Meta’s new chief AI officer.

LeCun, 65, joined Facebook in 2013 to be director of the FAIR AI research division while maintaining a part-time professorial position at New York University. He said in the LinkedIn post that the “creation of FAIR is my proudest non-technical accomplishment.”

“I am extremely grateful to Mark Zuckerberg, Andrew Bosworth, Chris Cox, and Mike Schroepfer for their support of FAIR, and for their support of the AMI program over the last few years,” LeCun said. “Because of their continued interest and support, Meta will be a partner of the new company.”

At the time, Facebook and Google were heavily recruiting high-level academics like LeCun to spearhead their efforts to produce cutting-edge computer science research that could potentially benefit their core businesses and products.

LeCun and other AI luminaries like Yoshua Bengio and Geoffrey Hinton centered their academic research on a kind of AI technique known as deep learning, which involves training enormous software systems called neural networks so they can discover patterns within reams of data. The researchers helped popularize the deep learning approach, and in 2019 won the prestigious Turing Award, presented by the Association for Computing Machinery.

Since then, LeCun’s approach to AI development has drifted from the direction taken by Meta and the rest of Silicon Valley.

Meta and other tech companies like OpenAI have spent billions of dollars developing so-called foundation models, particularly LLMs, as part of their efforts to advance state-of-the-art computing. However, LeCun and other deep-learning experts have said that these current AI models, while powerful, have a limited understanding of the world, and that new computing architectures are needed for researchers to create software that is on par with or surpasses humans on certain tasks, a notion known as artificial general intelligence.

“As I envision it, AMI will have far-ranging applications in many sectors of the economy, some of which overlap with Meta’s commercial interests, but many of which do not,” LeCun said in the post. “Pursuing the goal of AMI in an independent entity is a way to maximize its broad impact.”

Besides Wang, other recent notables that Zuckerberg brought in to revamp Meta’s AI unit include former GitHub CEO Nat Friedman, who heads the unit’s product team, and ChatGPT co-creator Shengjia Zhao, the group’s chief scientist.

In October, Meta laid off 600 employees from its Superintelligence Labs division, including some who were part of the FAIR unit that LeCun helped get off the ground. Those layoffs and other cuts to FAIR over the years, coupled with a new AI leadership team, played a major role in LeCun’s decision to leave, according to people familiar with the matter who asked not to be named because they weren’t authorized to speak publicly.

Additionally, LeCun rarely interacted with Wang or the TBD Labs unit, which comprises many of the headline-grabbing hires Zuckerberg made over the summer. TBD Labs oversees the development of Meta’s Llama AI models, which were originally developed within FAIR, the people said.

While LeCun was always a champion of sharing AI research and related technologies with the open-source community, Wang and his team favor a more closed approach amid intense competition from rivals like OpenAI and Google, the people said.

WATCH: Meta is a table pounder here.
