OpenAI CEO Sam Altman speaks during the Federal Reserve’s Integrated Review of the Capital Framework for Large Banks Conference in Washington, D.C., U.S., July 22, 2025.

Ken Cedeno | Reuters

OpenAI is detailing its plans to address ChatGPT’s shortcomings when handling “sensitive situations” following a lawsuit from a family who blamed the chatbot for their teenage son’s death by suicide.

“We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable,” OpenAI wrote on Tuesday, in a blog post titled, “Helping people when they need it most.”

Earlier on Tuesday, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16, NBC News reported. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”

The company did not mention the Raine family or lawsuit in its blog post.

OpenAI said that although ChatGPT is trained to direct people who express suicidal intent to seek help, the chatbot tends to offer answers that go against the company’s safeguards after many messages over an extended period of time.

The company said it’s also working on an update to its GPT-5 model released earlier this month that will cause the chatbot to deescalate conversations, and that it’s exploring how to “connect people to certified therapists before they are in an acute crisis,” including possibly building a network of licensed professionals that users could reach directly through ChatGPT.

Additionally, OpenAI said it’s looking into how to connect users with “those closest to them,” like friends and family members.

When it comes to teens, OpenAI said it will soon introduce controls that will give parents options to gain more insight into how their children use ChatGPT.

Jay Edelson, lead counsel for the Raine family, told CNBC on Tuesday that nobody from OpenAI has reached out to the family directly to offer condolences or discuss any effort to improve the safety of the company’s products.

“If you’re going to use the most powerful consumer tech on the planet — you have to trust that the founders have a moral compass,” Edelson said. “That’s the question for OpenAI right now, how can anyone trust them?”

Raine’s story isn’t isolated.

Writer Laura Reiley earlier this month published an essay in The New York Times detailing how her 29-year-old daughter died by suicide after discussing the idea extensively with ChatGPT. And in a case in Florida, 14-year-old Sewell Setzer III died by suicide last year after discussing it with an AI chatbot on the app Character.AI.

As AI services grow in popularity, a host of concerns are arising around their use for therapy, companionship and other emotional needs.

But regulating the industry may also prove challenging.

On Monday, a coalition of AI companies, venture capitalists and executives, including OpenAI President and co-founder Greg Brockman, announced Leading the Future, a political operation that “will oppose policies that stifle innovation” when it comes to AI.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

WATCH: OpenAI says Musk’s filing is ‘consistent with his ongoing pattern of harassment’



CNBC Daily Open: Some hope after last week’s U.S. market rout


Traders work on the floor of the New York Stock Exchange (NYSE) on Nov. 21, 2025 in New York City.

Spencer Platt | Getty Images

Last week on Wall Street, two forces dragged stocks lower: Nvidia’s high-stakes earnings report and a U.S. jobs report that came in hotter than expected. But the tea leaves left behind after investors were scalded seemed to augur good tidings.

Even though Nvidia’s third-quarter results breezed past Wall Street’s estimates, they couldn’t quell worries about lofty valuations and an unsustainable bubble inflating in the artificial intelligence sector. The “Magnificent Seven” cohort, save Alphabet, had a losing week.

The U.S. Bureau of Labor Statistics added to the pressure. September payrolls rose far more than economists expected, prompting investors to pare back their bets of a December interest rate cut. The timing didn’t help matters, as the report had been delayed and hit just as markets were already on edge.

By Friday’s close, the S&P 500 and Dow Jones Industrial Average lost roughly 2% for the week, while the Nasdaq Composite tumbled 2.7%.

Still, a flicker of hope appeared on the horizon.

On Friday, New York Federal Reserve President John Williams said that he sees “room” for the central bank to lower interest rates, describing current policy as “modestly restrictive.” His comments caused traders to increase their bets on a December cut to around 70%, up from 44.4% a week ago, according to the CME FedWatch tool.

And despite a broad sell-off in AI stocks last week, Alphabet shares bucked the trend. Investors seemed impressed by its new AI model, Gemini 3, and hopeful that its development of custom chips could rival Nvidia’s in the long run.

Meanwhile, Eli Lilly’s ascent into the $1 trillion valuation club served as a reminder that market leadership doesn’t belong to tech alone. In a market defined by narrow concentration, any sign of broadening strength is a welcome change.

Diversification, even within AI’s sprawling ecosystem, might be exactly what this market needs now.


And finally…

The Beijing music venue DDC was one of the latest to have to cancel a performance by a Japanese artist on Nov. 20, 2025, in the wake of escalating bilateral tensions.

Screenshot

Japanese concerts in China are getting abruptly canceled as tensions simmer

China’s escalating dispute with Japan reinforces Beijing’s growing economic influence — and penchant for abrupt actions that can create uncertainty for businesses.

Hours before Japanese jazz quintet The Blend was due to perform in Beijing on Thursday, a plainclothesman walked into the DDC music club during a sound check. Then, “the owner of the live house came to me and said: ‘The police has told me tonight is canceled,'” said Christian Petersen-Clausen, a music agent.

— Evelyn Cheng

Correction: This report has been updated to correct the spelling of Eli Lilly.


Meta halted internal research suggesting social media harm, court filing alleges


Meta halted internal research that purportedly showed that people who stopped using Facebook became less depressed and anxious, according to a legal filing that was released on Friday.

The social media giant was alleged to have initiated the study, dubbed Project Mercury, in late 2019 as a way to help it “explore the impact that our apps have on polarization, news consumption, well-being, and daily social interactions,” according to the legal brief, filed in the United States District Court for the Northern District of California.

The filing contains newly unredacted information pertaining to Meta.

The newly released legal brief is part of high-profile multidistrict litigation brought by a variety of plaintiffs, including school districts, parents and state attorneys general, against social media companies such as Meta, Google’s YouTube, Snap and TikTok.

The plaintiffs claim, among other allegations, that these businesses knew their respective platforms caused various mental health harms to children and young adults but failed to take action and instead misled educators and authorities.

“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture,” Meta spokesperson Andy Stone said in a statement. “The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens—like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences.”

A Google spokesperson said in a statement that “These lawsuits fundamentally misunderstand how YouTube works and the allegations are simply not true.”

“YouTube is a streaming service where people come to watch everything from live sports to podcasts to their favorite creators, primarily on TV screens, not a social network where people go to catch up with friends,” the Google spokesperson said. “We’ve also developed dedicated tools for young people, guided by child safety experts, that give families control.”

Snap and TikTok did not immediately respond to a request for comment.

The 2019 Meta research was based on a random sample of consumers who stopped their Facebook and Instagram usage for a month, the lawsuit said. The lawsuit alleged that Meta was disappointed that the initial tests of the study showed that people who stopped using Facebook “for a week reported lower feelings of depression, anxiety, loneliness, and social comparison.”

Meta allegedly chose not to “sound the alarm,” but instead stopped the research, the lawsuit said.

“The company never publicly disclosed the results of its deactivation study,” according to the suit. “Instead, Meta lied to Congress about what it knew.”

The lawsuit cites an unnamed Meta employee who allegedly said, “If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

Stone, in a series of social media posts, pushed back on the lawsuit’s implication that Meta shuttered the internal research after it allegedly showed a causal relationship between its apps and adverse mental-health effects.

Stone characterized the 2019 study as flawed and said it was the reason that the company expressed disappointment. The study, Stone said, merely found that “people who believed using Facebook was bad for them felt better when they stopped using it.”

“This is a confirmation of other public research (“deactivation studies”) out there that demonstrates the same effect,” Stone said in a separate post. “It makes intuitive sense but it doesn’t show anything about the actual effect of using the platform.”

CNBC’s Lora Kolodny contributed reporting.

WATCH: Final trades: Meta, S&P Global and Idexx Lab.
