Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in Hart building on Thursday, May 8, 2025.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a range of moral and ethical questions about his company and its popular ChatGPT model.

“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview. 

“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.” 

Rather, he said he loses the most sleep over the “very small decisions” on model behavior, which can ultimately have big repercussions.

These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn’t answer. Here’s an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.

How does ChatGPT address suicide?

According to Altman, the most difficult issue the company has been grappling with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son’s death.

The CEO said that of the thousands of people who die by suicide each week, many may have been talking to ChatGPT in the lead-up.

“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.” 

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”

Soon after, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” and said it would keep improving its technology to protect people who are at their most vulnerable. 

How are ChatGPT’s ethics determined?

Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards. 

While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won’t answer. 

“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.” 

When pressed on how certain model specifications are decided, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”

One example of such a specification, he said, is that ChatGPT will not answer users’ questions about how to make biological weapons.

“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added the company “won’t get everything right, and also needs the input of the world” to help make these decisions.

How private is ChatGPT?

Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”

In response, Altman said one piece of policy he has been pushing for in Washington is “AI privilege,” which refers to the idea that anything a user says to a chatbot should be completely confidential. 

“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.” 

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.

“I think I feel optimistic that we can get the government to understand the importance of this,” he said. 

Will ChatGPT be used in military operations?

Just how powerful is OpenAI?

In the interview, Carlson predicted that, on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a “religion.”

In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in “a huge up leveling” of all people. 

“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.”

However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.

Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’

Startup Figure AI is developing general-purpose humanoid robots.

Figure AI

Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”

Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”

The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.

In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and Kyle Edelberg, chief engineer, about the robot’s lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”

The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.

Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.

The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.

Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

Figure didn’t immediately respond to a request for comment. Nor did attorneys for Gruendel.

The humanoid robot market remains nascent today, with companies like Tesla and Boston Dynamics pursuing futuristic offerings, alongside Figure, while China’s Unitree Robotics is preparing for an IPO. Morgan Stanley said in a report in May that adoption is “likely to accelerate in the 2030s” and could top $5 trillion by 2050.
