A 26-year-old former OpenAI researcher, Suchir Balaji, was found dead in his San Francisco apartment in recent weeks, CNBC has confirmed.

Balaji left OpenAI earlier this year and raised concerns publicly that the company had allegedly violated U.S. copyright law while developing its popular ChatGPT chatbot.

“The manner of death has been determined to be suicide,” David Serrano Sewell, executive director of San Francisco’s Office of the Chief Medical Examiner, told CNBC in an email on Friday. He said Balaji’s next of kin have been notified.

The San Francisco Police Department said in an email that on the afternoon of Nov. 26, officers were called to an apartment on Buchanan Street to conduct a “wellbeing check.” They found a deceased adult male, and discovered “no evidence of foul play” in their initial investigation, the department said.

News of Balaji’s death was first reported by the San Jose Mercury News. A family member contacted by the paper requested privacy.

In October, The New York Times published a story about Balaji’s concerns.

“If you believe what I believe, you have to just leave the company,” Balaji told the paper. He reportedly believed that ChatGPT and other chatbots like it would destroy the commercial viability of people and organizations who created the digital data and content now widely used to train AI systems.

A spokesperson for OpenAI confirmed Balaji’s death.

“We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir’s loved ones during this difficult time,” the spokesperson said in an email.

OpenAI is currently involved in legal disputes with a number of publishers, authors and artists over alleged use of copyrighted material for AI training data. A lawsuit filed by news outlets last December seeks to hold OpenAI and principal backer Microsoft accountable for billions of dollars in damages.

“We actually don’t need to train on their data,” OpenAI CEO Sam Altman said at an event organized by Bloomberg in Davos earlier this year. “I think this is something that people don’t understand. Any one particular training source, it doesn’t move the needle for us that much.”

If you are having suicidal thoughts, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

— CNBC’s Hayden Field contributed reporting.

Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’


Startup Figure AI is developing general-purpose humanoid robots.


Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”

Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”

The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.

In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and chief engineer Kyle Edelberg about the robots’ lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”

The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.

Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.

The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.

Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

Figure didn’t immediately respond to a request for comment. Nor did attorneys for Gruendel.

The humanoid robot market remains nascent, with companies like Tesla and Boston Dynamics pursuing futuristic offerings alongside Figure, while China’s Unitree Robotics prepares for an IPO. Morgan Stanley said in a report in May that adoption is “likely to accelerate in the 2030s” and that the market could top $5 trillion by 2050.

