The U.S. Supreme Court against a blue sky in Washington, D.C., US. Photographer: Stefani Reynolds/Bloomberg

Bloomberg Creative | Bloomberg Creative Photos | Getty Images

A legal test that Google’s lawyer told the Supreme Court was roughly “96% correct” could drastically undermine the liability shield the company and other tech platforms have relied on for decades, according to several experts who favor keeping the law’s protections intact.

The so-called “Henderson test” would significantly weaken the power of Section 230 of the Communications Decency Act, several experts said in conversations and briefings following oral arguments in the case Gonzalez v. Google. Some of those who criticized Google’s concession even work for groups backed by the company.

Section 230 is the statute that protects tech platforms’ ability to host material from users — like social media posts, uploaded video and audio files, and comments — without being held legally liable for their content. It also allows platforms to moderate their services and remove posts they consider objectionable.

The law is central to the question that will be decided by the Supreme Court in the Gonzalez case, which asks whether platforms like Google’s YouTube can be held responsible for algorithmically recommending user posts that seem to endorse or promote terrorism.

In arguments on Tuesday, the justices seemed hesitant to issue a ruling that would overhaul Section 230.

But even if they avoid commenting on that law, they could still issue caveats that change the way it’s enforced, or clear a path for changing the law in the future.

What is the Henderson test?

One way the Supreme Court could undercut Section 230 is by endorsing the Henderson test, some advocates believe. Ironically, Google’s own lawyers may have given the court more confidence to endorse this test, if it chooses to do so.

The Henderson test came about from a November ruling by the Fourth Circuit appeals court in Henderson v. The Source for Public Data. The plaintiffs in that case sued a group of companies that collect public information about individuals, like criminal records, voting records and driving information, then put it in a database that they sell to third parties. The plaintiffs alleged that the companies violated the Fair Credit Reporting Act by failing to maintain accurate information, and by providing inaccurate information to a potential employer.

A lower court ruled that Section 230 barred the claims, but the appeals court overturned that decision.

The appeals court wrote that for Section 230 protection to apply, “we require that liability attach to the defendant on account of some improper content within their publication.”

In this case, it wasn’t the content itself that was at fault, but how the company chose to present it.

The court also ruled Public Data was responsible for the content because it decided how to present it, even though the information was pulled from other sources. The court said it’s plausible that some of the information Public Data sent to one of the plaintiff’s potential employers was “inaccurate because it omitted or summarized information in a way that made it misleading.” In other words, once Public Data made changes to the information it pulled, it became an information content provider.

Should the Supreme Court endorse the Henderson ruling, it would effectively “moot Section 230,” said Jess Miers, legal advocacy counsel for Chamber of Progress, a center-left industry group that counts Google among its backers. Miers said this is because Section 230’s primary advantage is to help quickly dismiss cases against platforms that center on user posts.

“It’s a really dangerous test because, again, it encourages plaintiffs to then just plead their claims in ways that say, well, we’re not talking about how improper the content is at issue,” Miers said. “We’re talking about the way in which the service put that content together or compiled that content.”

Eric Goldman, a professor at Santa Clara University School of Law, wrote on his blog that Henderson would be a “disastrous ruling if adopted by SCOTUS.”

“It was shocking to me to see Google endorse a Henderson opinion, because it’s a dramatic narrowing of Section 230,” Goldman said at a virtual press conference hosted by Chamber of Progress after the arguments. “And to the extent that the Supreme Court takes that bait and says, ‘Henderson’s good to Google, it’s good to us,’ we will actually see a dramatic narrowing of Section 230 where plaintiffs will find lots of other opportunities to bring cases that are based on third-party content. They’ll just say that they’re based on something other than the harm that was in the third party content itself.”

Google pointed to the parts of its brief in the Gonzalez case that discuss the Henderson test. In the brief, Google attempts to distinguish the actions of a search engine, social media site, or chat room that displays snippets of third-party information from those of a credit-reporting website, like those at issue in Henderson.

In the case of a chatroom, Google says, although the “operator supplies the organization and layout, the underlying posts are still third-party content,” meaning it would be covered by Section 230.

“By contrast, where a credit-reporting website fails to provide users with its own required statement of consumer rights, Section 230(c)(1) does not bar liability,” Google wrote. “Even if the website also publishes third-party content, the failure to summarize consumer rights and provide that information to customers is the website’s act alone.”

Google also said 230 would not apply to a website that “requires users to convey allegedly illegal preferences,” like those that would violate housing law. That’s because by “‘materially contributing to [the content’s] unlawfulness,’ the website makes that content its own and bears responsibility for it,” Google said, citing the 2008 Fair Housing Council of San Fernando Valley v. Roommates.com case.

Concerns over Google’s concession

Section 230 experts digesting the Supreme Court arguments were perplexed by Google’s lawyer’s decision to give such a full-throated endorsement of Henderson. In trying to make sense of it, several suggested it might have been a strategic decision to try to show the justices that Section 230 is not a boundless free pass for tech platforms.

But in doing so, many also felt Google went too far.

Cathy Gellis, who represented amici in a brief submitted in the case, said at the Chamber of Progress briefing that Google’s lawyer was likely looking to illustrate the line of where Section 230 does and does not apply, but “by endorsing it as broadly, it endorsed probably more than we bargained for, and certainly more than necessarily amici would have signed on for.”

Corbin Barthold, internet policy counsel at Google-backed TechFreedom, said in a separate press conference that the idea Google may have been trying to convey in supporting Henderson wasn’t necessarily bad on its own. He said Google seemed to be arguing that even if you use a definition of publication like the one Henderson lays out, organizing information is inherent to what platforms do because “there’s no such thing as just like brute conveyance of information.”

But in making that argument, Barthold said, Google’s lawyer “kind of threw a hostage to fortune.”

“Because if the court then doesn’t buy the argument that Google made that there’s actually no distinction to be had here, it could go off in kind of a bad direction,” he added.

Miers speculated that Google might have seen the Henderson case as a relatively safe one to cite, given that it involves an alleged violation of the Fair Credit Reporting Act, rather than a question about a user’s social media post.

“Perhaps Google’s lawyers were looking for a way to show the court that there are limits to Section 230 immunity,” Miers said. “But I think in doing so, that invites some pretty problematic readings into the Section 230 immunity test, which can have pretty irreparable results for future internet law litigation.”

WATCH: Why the Supreme Court’s Section 230 case could reshape the internet



Salesforce’s Agentforce software is coming to OpenAI’s ChatGPT later this year


Salesforce CEO Marc Benioff participates in an interview during the World Economic Forum in Davos, Switzerland, on Jan. 22, 2025.

Chris Ratcliffe | Bloomberg | Getty Images

Salesforce is ramping up partnerships with leaders in generative artificial intelligence as investors continue to fear that the software company faces business risks due to the rapid growth of AI.

Just ahead of its annual Dreamforce conference in San Francisco, Salesforce said Tuesday it will enable the use of AI models from OpenAI and Anthropic inside its Agentforce 360 software. A day earlier, Salesforce expanded Agentforce beyond text chats to also handle voice calls.

“The way people are going to interact with software is going to fundamentally shift,” said Brian Landsman, CEO of Salesforce’s AppExchange business and executive vice president of partnerships, in an interview. The interaction could be in ChatGPT or in Slack, he said.

Salesforce will collaborate with Anthropic to bring Agentforce 360 into Claude, Landsman added.

Shares of Salesforce are down about 26% this year, while the S&P 500 index has gained 13%, as Wall Street seeks faster revenue growth from the cloud software company. So far, Agentforce revenue has been “modest,” Morgan Stanley analysts, who have the equivalent of a buy rating on Salesforce, wrote in a Monday note.

Large software companies are increasingly turning to popular AI model developers for new capabilities. Atlassian, Datadog and Intuit have previously signed deals with OpenAI, and Microsoft has invested almost $14 billion in the company. In September, Databricks committed to spending $100 million on OpenAI models.

As part of Salesforce’s announcement, customers will be able to access corporate information in Agentforce 360 and create charts in Tableau through the ChatGPT assistant, which has more than 800 million weekly users. Last week OpenAI announced a software development kit for integrating third-party applications into ChatGPT.

Companies working with both OpenAI and Salesforce will be able to sell products through ChatGPT’s instant checkout feature later in 2025. Salesforce plans to work with Anthropic on selling products for regulated industries, starting with financial services.

OpenAI said last month that ChatGPT users would be able to purchase products from U.S. Etsy sellers and Shopify merchants.

Meanwhile, Salesforce said its engineering organization is adopting Anthropic’s Claude Code programming product.

“We plan to continue to go much deeper with these partners over time,” Landsman said.

Salesforce CEO Marc Benioff has been defending his company’s position in the AI boom. And on last month’s earnings call, he said Anthropic and OpenAI both use Salesforce tools.

“All these next-generation AI companies ranging from OpenAI to Anthropic to everyone are on Slack,” Benioff, who is also Salesforce’s co-founder, told analysts. “And it is incredible how they’ve used that as their operating system and as their platform to run their companies.”

WATCH: Salesforce CEO Marc Benioff goes one-on-one with Jim Cramer



Instagram rolls out PG-13 content guidelines for teenage users


Instagram has installed a new privacy setting which will default all new and existing underage accounts to an automatic private mode.

Brandon Bell | Getty Images

Meta will now limit the content that teenage users can see on Instagram to what they would typically encounter in a movie rated PG-13, the social media company said Tuesday.

With the new content guidelines, Meta said it will hide certain accounts from teenagers, including those that share sexualized content or media related to drugs and alcohol. Additionally, teenagers on Instagram will not be recommended posts that contain swear words, though teen users can still search for such content.

The changes come after the company has faced waves of criticism over its handling of child-safety and related mental health concerns on its platform.

As part of the changes, Instagram accounts with names or biographies that link to adult-themed websites like OnlyFans or liquor stores will be hidden from teens, the company said. Teen Instagram users will no longer be able to follow those kinds of accounts, and if they already do, they will be unable to see or interact with the more adult-leaning content that they share.

Meta executives said during a media briefing that while the company’s previous content guidelines were already in line with or exceeded PG-13 standards, some parents said they were confused about what kinds of content teens could view on Instagram. To provide clarity, Meta decided to more closely standardize its teen-content policies with movie ratings that parents could better understand, the executives said.

“We decided to more closely align our policies with an independent standard that parents are familiar with, so we reviewed our age-appropriate guidelines against PG-13 movie ratings and updated them accordingly,” the company said in a blog post. “While of course there are differences between movies and social media, we made these changes so teens’ experience in the 13+ setting feels closer to the Instagram equivalent of watching a PG-13 movie.”

The social media company has come under fire from lawmakers who claim that it fails to adequately police its platform for child-safety related issues.

The company then known as Facebook came under fire in 2021 when The Wall Street Journal published a report citing internal company research that showed how harmful Instagram was for teenage girls specifically. Other reports have also shown how easily teenagers can use Instagram to find drugs, including through ads run by the company.

Over the past year, Meta has rolled out several features intended to provide parents more transparency about how their teenagers are using the company’s apps. In July, Meta debuted new safety tools intended to make it easier for teenage Instagram users to block and report accounts as well as receive more information about who they interact with on the platform.

In August, the watchdog Tech Transparency Project released a report that alleged Meta’s ties and sponsorship of the National Parent Teacher Association “gives a sheen of expert approval” to its “efforts to keep young users engaged on its platforms.” The National PTA said in a statement that it doesn’t endorse any social media platform, while Meta said at the time that it is “proud to partner with expert organizations to educate parents about our safety tools and protections for teens, as many other tech companies do.”

Meta said its new Instagram content guidelines will begin rolling out Tuesday in the U.S., UK, Australia and Canada before expanding to other regions.

WATCH: Is an AI Bubble Brewing?



California just passed new AI and social media laws. Here’s what they mean for Big Tech


Governor Gavin Newsom speaks at Google’s San Francisco office about “Creating an AI-Ready Workforce,” a new joint effort with some of the world’s leading tech companies to help better prepare California’s students and workers for the next generation of technology, in San Francisco, California, on August 7, 2025.

Tayfun Coskun | Anadolu | Getty Images

California Gov. Gavin Newsom signed a series of bills Monday targeting child online safety as concerns over the risks associated with artificial intelligence and social media use keep mounting.

“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way,” he said in a release. “Our children’s safety is not for sale.”

The latest legislation comes as the AI craze ushers in a wave of more complex chatbots capable of deep, intellectual conversation and of encouraging certain behaviors in users.

A recent survey from Fractl Agents found that one in six Americans rely on chatbots and worry that losing access would stunt them emotionally and professionally. More than a fifth of respondents reported having an emotional connection with their chatbot.

Many lawmakers have called for laws requiring Big Tech to better protect against chatbots promoting unsafe behaviors such as suicide and self-harm on their platforms.

The bills signed into law by Newsom on Monday are intended to address some of those concerns.

The changes

One of the laws passed by California implements a series of safeguards geared toward AI chatbots.

SB 243 is the first state law of its kind and requires chatbots to disclose that they are AI and to tell minors every three hours to “take a break.” Chatbot makers will also need to implement tools to protect against harmful behaviors and to disclose certain instances to a crisis hotline.

The law allows California to maintain its lead in innovation while also holding companies accountable and prioritizing safety, Newsom said in a release.

In a statement to CNBC, OpenAI called the law a “meaningful move forward” for AI safety standards.

“By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” the company said.

Another bill signed by Newsom, AB 56, requires social media platforms, including Instagram and Snapchat, to add labels warning users of the potential mental health risks associated with using those types of apps. AB 621, meanwhile, heightens penalties for companies whose platforms distribute deepfake pornography.

The other key law, known as AB 1043, requires that device makers, like Apple and Google, implement tools to verify user ages in their app stores. Some Big Tech companies have already endorsed the law’s safeguards, including Google and Meta.

Last month, Kareem Ghanem, Google’s senior director of government affairs and public policy, called AB 1043 one of the “most thoughtful approaches” to keeping children safe online.

The impact on Big Tech

The new laws require a series of changes to many long-standing business models. But D.A. Davidson’s Gil Luria said companies should experience a “distributed” impact from these new measures, since all businesses are forced to accommodate the rules.

“For AI chats the timing is beneficial since these companies are still working out their business models and will now accommodate a more restrictive approach at the outset,” he said.

Other countries have already enacted tougher restrictions on AI. Last year, the European Union passed the AI Act, which imposes fines on companies that violate the law’s framework, including its restrictions on social scoring systems.

Utah and Texas have also signed laws implementing AI safeguards for minors. The Utah law, for example, requires Apple and Google to verify user ages and requires parental permission for those under 18 to use certain apps. These laws have also raised questions over whether harsh restrictions violate free speech and whether bans are the most effective solution.

California isn’t the first jurisdiction to pass laws like these, but Newsom’s signings carry significance due to the size of the state’s population and the fact that many tech companies are based in the San Francisco Bay Area.

WATCH: Why it’s time to take AI-human relationships seriously

