The U.S. Supreme Court against a blue sky in Washington, D.C., US. Photographer: Stefani Reynolds/Bloomberg

A legal test that Google’s lawyer told the Supreme Court was roughly “96% correct” could drastically undermine the liability shield that the company and other tech platforms have relied on for decades, according to several experts who favor preserving the law’s broad protections.

The so-called “Henderson test” would significantly weaken the power of Section 230 of the Communications Decency Act, several experts said in conversations and briefings following oral arguments in the case Gonzalez v. Google. Some of those who criticized Google’s concession even work for groups backed by the company.

Section 230 is the statute that protects tech platforms’ ability to host material from users — like social media posts, uploaded video and audio files, and comments — without being held legally liable for their content. It also allows platforms to moderate their services and remove posts they consider objectionable.

The law is central to the question that will be decided by the Supreme Court in the Gonzalez case, which asks whether platforms like Google’s YouTube can be held responsible for algorithmically recommending user posts that seem to endorse or promote terrorism.

In arguments on Tuesday, the justices seemed hesitant to issue a ruling that would overhaul Section 230.

But even if they decline to rule on the law directly, they could still issue caveats that change the way it’s enforced, or clear a path for changing it in the future.

What is the Henderson test?

One way the Supreme Court could undercut Section 230 is by endorsing the Henderson test, some advocates believe. Ironically, Google’s own lawyers may have given the court more confidence to endorse this test, if it chooses to do so.

The Henderson test stems from a November ruling by the Fourth Circuit appeals court in Henderson v. The Source for Public Data. The plaintiffs in that case sued a group of companies that collect public information about individuals, like criminal records, voting records and driving information, then put it in a database that they sell to third parties. The plaintiffs alleged that the companies violated the Fair Credit Reporting Act by failing to maintain accurate information, and by providing inaccurate information to a potential employer.

A lower court ruled that Section 230 barred the claims, but the appeals court overturned that decision.

The appeals court wrote that for Section 230 protection to apply, “we require that liability attach to the defendant on account of some improper content within their publication.”

In this case, it wasn’t the content itself that was at fault, but how the company chose to present it.

The court also ruled Public Data was responsible for the content because it decided how to present it, even though the information was pulled from other sources. The court said it’s plausible that some of the information Public Data sent to one of the plaintiffs’ potential employers was “inaccurate because it omitted or summarized information in a way that made it misleading.” In other words, once Public Data made changes to the information it pulled, it became an information content provider.

Should the Supreme Court endorse the Henderson ruling, it would effectively “moot Section 230,” said Jess Miers, legal advocacy counsel for Chamber of Progress, a center-left industry group that counts Google among its backers. Miers said this is because Section 230’s primary advantage is to help quickly dismiss cases against platforms that center on user posts.

“It’s a really dangerous test because, again, it encourages plaintiffs to then just plead their claims in ways that say, well, we’re not talking about how improper the content is at issue,” Miers said. “We’re talking about the way in which the service put that content together or compiled that content.”

Eric Goldman, a professor at Santa Clara University School of Law, wrote on his blog that Henderson would be a “disastrous ruling if adopted by SCOTUS.”

“It was shocking to me to see Google endorse a Henderson opinion, because it’s a dramatic narrowing of Section 230,” Goldman said at a virtual press conference hosted by Chamber of Progress after the arguments. “And to the extent that the Supreme Court takes that bait and says, ‘Henderson’s good to Google, it’s good to us,’ we will actually see a dramatic narrowing of Section 230 where plaintiffs will find lots of other opportunities to bring cases that are based on third-party content. They’ll just say that they’re based on something other than the harm that was in the third-party content itself.”

Google pointed to the parts of its brief in the Gonzalez case that discuss the Henderson test. In the brief, Google attempts to distinguish the actions of a search engine, social media site, or chat room that displays snippets of third-party information from those of a credit-reporting website, like those at issue in Henderson.

In the case of a chat room, Google says, although the “operator supplies the organization and layout, the underlying posts are still third-party content,” meaning they would be covered by Section 230.

“By contrast, where a credit-reporting website fails to provide users with its own required statement of consumer rights, Section 230(c)(1) does not bar liability,” Google wrote. “Even if the website also publishes third-party content, the failure to summarize consumer rights and provide that information to customers is the website’s act alone.”

Google also said Section 230 would not apply to a website that “requires users to convey allegedly illegal preferences,” like those that would violate housing law. That’s because by “‘materially contributing to [the content’s] unlawfulness,’ the website makes that content its own and bears responsibility for it,” Google said, citing the 2008 Fair Housing Council of San Fernando Valley v. Roommates.com case.

Concerns over Google’s concession

Section 230 experts digesting the Supreme Court arguments were perplexed by Google’s lawyer’s decision to give such a full-throated endorsement of Henderson. In trying to make sense of it, several suggested it might have been a strategic decision to try to show the justices that Section 230 is not a boundless free pass for tech platforms.

But in doing so, many also felt Google went too far.

Cathy Gellis, who represented amici in a brief submitted in the case, said at the Chamber of Progress briefing that Google’s lawyer was likely looking to illustrate the line of where Section 230 does and does not apply, but “by endorsing it as broadly, it endorsed probably more than we bargained for, and certainly more than necessarily amici would have signed on for.”

Corbin Barthold, internet policy counsel at Google-backed TechFreedom, said in a separate press conference that the idea Google may have been trying to convey in supporting Henderson wasn’t necessarily bad on its own. He said Google seemed to be arguing that even if you use a definition of publication like the one Henderson lays out, organizing information is inherent to what platforms do because “there’s no such thing as just like brute conveyance of information.”

But in making that argument, Barthold said, Google’s lawyer “kind of threw a hostage to fortune.”

“Because if the court then doesn’t buy the argument that Google made that there’s actually no distinction to be had here, it could go off in kind of a bad direction,” he added.

Miers speculated that Google might have seen the Henderson case as a relatively safe one to cite, given that it involves an alleged violation of the Fair Credit Reporting Act, rather than a question of a user’s social media post.

“Perhaps Google’s lawyers were looking for a way to show the court that there are limits to Section 230 immunity,” Miers said. “But I think in doing so, that invites some pretty problematic readings into the Section 230 immunity test, which can have pretty irreparable results for future internet law litigation.”

WATCH: Why the Supreme Court’s Section 230 case could reshape the internet

OpenAI temporarily blocked from using ‘Cameo’ after trademark lawsuit

OpenAI will not be allowed to use the word “cameo” to name any products or features in its Sora app for a month, after a federal judge issued a temporary restraining order against the AI startup over the term.

U.S. District Judge Eumi K. Lee granted a temporary restraining order on Monday, blocking OpenAI from using the “cameo” mark or similar words like “Kameo” or “CameoVideo” for any function related to Sora, the company’s AI-generated video app.

“We disagree with the complaint’s assertion that anyone can claim exclusive ownership over the word ‘cameo,’ and we look forward to continuing to make our case to the court,” an OpenAI spokesperson told CNBC.

Lee granted the order after OpenAI was sued in October by Cameo, a platform that allows users to purchase personalized videos from celebrities. Cameo filed a trademark lawsuit against the artificial intelligence company following the launch of Sora’s “Cameo” feature, which allowed users to generate characters of themselves or others and insert them into videos.

“We are gratified by the court’s decision, which recognizes the need to protect consumers from the confusion that OpenAI has created by using the Cameo trademark,” Cameo CEO Steven Galanis said in a statement. “While the court’s order is temporary, we hope that OpenAI will agree to stop using our mark permanently to avoid any further harm to the public or Cameo.”

The order is set to expire on Dec. 22, and a hearing on whether the halt should be made permanent is scheduled for Dec. 19.

WATCH: Cameo CEO on OpenAI lawsuit: Problem is using our name, not Sora AI

OpenAI announces shopping research tool in latest e-commerce push

Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, US, on Tuesday, Sept. 23, 2025.

Kyle Grillot | Bloomberg | Getty Images

OpenAI announced a new tool called “shopping research” on Monday, just as consumers ramp up spending ahead of the holiday season.

The startup said the tool is designed for ChatGPT users who are looking for detailed, well-researched shopping guides. The guides include top products, key differences between the products and the latest information from retailers, according to a blog post.

Users will be able to tailor their guides based on their budget, what features they care about and who they are shopping for. OpenAI said it will take a couple of minutes to generate answers with shopping research, so users who are looking for simple answers like a price check can still rely on a regular ChatGPT response.

When users submit prompts to ChatGPT that say things like, “Find the quietest cordless stick vacuum for a small apartment,” or “I need a gift for my four-year-old niece who loves art,” they will see the shopping research tool pop up automatically, OpenAI said. The tool can also be accessed from the menu.

OpenAI has been pushing deeper into e-commerce in recent months. The company introduced a feature called Instant Checkout in September that allows users to make purchases directly from eligible merchants through ChatGPT.

Shopping research users will be able to make purchases with Instant Checkout in the future, OpenAI said on Monday.

OpenAI said its shopping research results are organic and based on publicly available retail websites, and that it will not share users’ chats with retailers. It’s possible that shopping research will make mistakes around product availability and pricing, the company said.

Shopping research is rolling out to OpenAI’s Free, Go, Plus and Pro users who are logged in to ChatGPT.

WATCH: OpenAI taps Foxconn to build AI hardware in the U.S.

Tesla fans told by Dutch safety regulator to stop pressuring agency on ‘FSD Supervised’

A Tesla logo outside the company’s Tilburg Factory and Delivery Center.

Karol Serewis | Getty Images

Tesla is trying to get its “FSD Supervised” technology approved for use in the Netherlands. But Dutch regulators are telling Tesla fans to stop pressuring safety authority RDW on the matter, and that their efforts will have “no influence” on the ultimate decision.

The RDW issued a statement on Monday directed at those who have been sending messages to try to get the agency to clear Tesla’s premium partially automated driving system, marketed in the U.S. as the Full Self-Driving (Supervised) option. It’s not yet available for use in the Netherlands or elsewhere in Europe.

“We thank everyone who has already done so and would like to ask everyone not to contact us about this,” the agency said. “It takes up unnecessary time for our customer service. Moreover, this will have no influence on whether or not the planning is met. Road safety is the RDW’s top priority: admission is only possible once the safety of the system has been convincingly demonstrated.”

The regulator said it will make a decision only after Elon Musk’s company shows that the technology meets the country’s stringent vehicle safety standards. The RDW has booked a schedule allowing Tesla to demonstrate its systems, and said it could decide on authorization as early as February.

Last week, Tesla posted on X encouraging its followers to contact RDW to express their wishes to have the systems approved.

The post claimed, “RDW has committed to granting Netherlands National approval in February 2026,” adding a message to “please contact them via link below to express your excitement & thank them for making this happen as soon as possible.” Tesla said other EU countries could then follow suit.

The RDW corrected Tesla on Monday, saying in a statement on its official website that such approval is not guaranteed and has not been promised.

Tesla didn’t immediately respond to a request for comment.

In the U.S., the National Highway Traffic Safety Administration opened an investigation into Tesla’s FSD-equipped vehicles in October following reports of widespread traffic violations tied to use of the systems.

The cars Tesla sells today, even with FSD Supervised engaged, require a human driver ready to brake or steer at any time.

For years, Musk has promised that Tesla customers would soon be able to turn their existing electric vehicles into robotaxis, capable of generating income for owners while they sleep or go on vacation, with a simple software update.

That hasn’t happened yet, and Tesla has since informed owners that future upgrades will require new hardware as well as software releases.

Tesla is testing a Robotaxi-brand ride-hailing service in Texas and elsewhere, but it includes human safety drivers or supervisors on board who either conduct the drives or manually intervene as needed. Musk has said the company aims to remove human drivers in Austin, Texas, by the end of 2025.

WATCH: Tesla bear on company’s EV business
