A bipartisan pair of senators reintroduced the Kids Online Safety Act on Tuesday with updates aimed at addressing concerns that the bill could inadvertently cause more harm to the young internet users it seeks to protect. But some activists who raised those issues say the changes are still insufficient.

The bill aims to make the internet safer for kids by putting the onus on social media companies to prevent and mitigate harms that might come from their services. The new version of the bill defines a specific list of harms that platforms need to take reasonable steps to mitigate, including by preventing the spread of posts promoting suicide, eating disorders, substance abuse and more. It would require those companies to undergo annual independent audits of their risks to minors and require them to enable the strongest privacy settings by default for kids.


Congress and President Joe Biden have made clear that online protections for children are a key priority, and KOSA has become one of the leading bills on the subject. KOSA has racked up more than 25 co-sponsors, and the earlier version of the bill passed unanimously out of the Senate Commerce Committee last year. The new version has gained support from groups such as Common Sense Media, the American Psychological Association, the American Academy of Pediatrics and the Eating Disorders Coalition.

At a virtual press conference on Tuesday, Sen. Richard Blumenthal, D-Conn., who introduced the bill alongside Sen. Marsha Blackburn, R-Tenn., said that Senate Majority Leader Chuck Schumer, D-N.Y., is “a hundred percent behind this bill and efforts to protect kids online.”

While Blumenthal acknowledged it’s ultimately up to Senate leadership to figure out timing, he said, “I fully hope and expect we’ll have a vote this session.”

A Schumer spokesperson did not immediately respond to a request for comment.

Late last year, dozens of civil society groups warned Congress against passing the bill, arguing it could further endanger young internet users in several ways. For example, the groups worried the bill would add pressure on online platforms to “over-moderate, including from state Attorneys General seeking to make political points about what kind of information is appropriate for young people.”

Blumenthal and Blackburn made several changes to the text in response to critiques from outside groups. They sought to more carefully tailor the legislation to limit the duty of care requirements for social media platforms to a specific set of potential harms to mental health based on evidence-backed medical information.

They also added protections for support services like the National Suicide Hotline, substance abuse groups and LGBTQ youth centers to ensure they aren’t unintentionally hampered by the bill’s requirements. Blumenthal’s office said it did not believe the duty of care would have applied to those sorts of groups, but opted to clarify it regardless.

But the changes have not been enough to placate some civil society and industry groups.

Evan Greer, director of digital rights nonprofit Fight for the Future, said Blumenthal’s office never met with the group or shared the updated text in advance of the introduction despite multiple requests. Greer acknowledged the co-sponsors’ offices met with other groups, but said in an emailed statement that “it seems they intentionally excluded groups that have specific issue-area expertise in content moderation, algorithmic recommendation, etc.”

“I’ve read through it and can say unequivocally that the changes that have been made DO NOT address the concerns that we raised in our letter,” Greer wrote. “The bill still contains a duty of care that covers content recommendation, and it still allows state Attorneys General to effectively dictate what content platforms can recommend to minors.”

“The ACLU remains strongly opposed to KOSA because it would ironically expose the very children it seeks to protect to increased harm and increased surveillance,” ACLU Senior Policy Counsel Cody Venzke said in a statement. The group joined the letter warning against its passage last year.

“KOSA’s core approach still threatens the privacy, security, and free expression of both minors and adults by deputizing platforms of all stripes to police their users and censor their content under the guise of a ‘duty of care,'” Venzke added. “To accomplish this, the bill would legitimize platforms’ already pervasive data collection to identify which users are minors when it should be seeking to curb those data abuses. Moreover, parental guidance in minors’ online lives is critical, but KOSA would mandate surveillance tools without regard to minors’ home situations or safety. KOSA would be a step backwards in making the internet a safer place for children and minors.”

At the press conference, in response to a question about Fight for the Future’s critiques, Blumenthal said the duty of care had been “very purposefully narrowed” to target certain harms.

“I think we’ve met that kind of suggestion very directly and effectively,” he said. “Obviously, our door remains open. We’re willing to hear and talk to other kinds of suggestions that are made. And we have talked to many of the groups that had great criticism and a number have actually dropped their opposition, as I think you’ll hear in response to today’s session. So I think our bill is clarified and improved in a way that meets some of the criticism. We’re not going to solve all of the problems of the world with a single bill. But we are making a measurable, very significant start.”

The bill also faced criticism from several groups that receive funding from the tech industry.

NetChoice, which has sued California over its Age-Appropriate Design Code Act and whose members include Google, Meta and TikTok, said in a press release that despite lawmakers’ attempts to respond to concerns, “unfortunately, how this bill would work in practice still requires an age verification mechanism and data collection on Americans of all ages.”

“Working out how young people should use technology is a difficult question and has always been best answered by parents,” NetChoice Vice President and General Counsel Carl Szabo said in a statement. “KOSA instead creates an oversight board of DC insiders who will replace parents in deciding what’s best for children.”

“KOSA 2.0 raises more questions than it answers,” Ari Cohn, free speech counsel at TechFreedom, a think tank that’s received funding from Google, said in a statement. “What constitutes reason to know that a user is under 17 is entirely unclear, and undefined by the bill. In the face of that uncertainty, platforms will clearly have to age-verify all users to avoid liability—or worse, avoid obtaining any knowledge whatsoever and leave minors without any protections at all.”

“Protecting young people online is a broadly shared goal. But it would contradict the goals of bills such as this to impose compliance obligations that undermine the privacy and safety of teens,” said Matt Schruers, president of the Computer & Communications Industry Association, whose members include Amazon, Google, Meta and Twitter. “Governments should avoid compliance requirements that would compel digital services to collect more personal information about their users — such as geolocation information and a government-issued identification — particularly when responsible companies are instituting measures to collect and store less data on customers.”


Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’


Startup Figure AI is developing general-purpose humanoid robots. (Image: Figure AI)

Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”

Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”

The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.

In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and Chief Engineer Kyle Edelberg about the robots’ lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”

The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.

Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.

The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.

Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

Figure didn’t immediately respond to a request for comment. Nor did attorneys for Gruendel.

The humanoid robot market remains nascent, with companies such as Tesla and Boston Dynamics pursuing futuristic offerings alongside Figure, while China’s Unitree Robotics prepares for an IPO. Morgan Stanley said in a May report that adoption is “likely to accelerate in the 2030s” and that the market could top $5 trillion by 2050.

