
Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.

Nathan Howard | Reuters

Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.

The social media giant is now training its AI chatbots not to engage with teenagers on subjects like self-harm, suicide and disordered eating, and to avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.

The company said AI chatbots will instead point teenagers to expert resources when appropriate.

“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.

Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.

The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of Meta’s longer-term work on teen safety.

TechCrunch was first to report the change.

Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.


The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.  

In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”

A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”

Most recently, the nonprofit advocacy group Common Sense Media on Thursday released a risk assessment of Meta AI, saying it should not be used by anyone under the age of 18 because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”

“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”

A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.

The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”

A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”

WATCH: Is the A.I. trade overdone?



How black boxes became key to solving airplane crashes

After the search for survivors and recovery of victims in tragic aviation accidents — like the crash of a UPS cargo plane shortly after takeoff from Louisville Muhammad Ali International Airport in Kentucky last month — comes the search for the flight data recorder and the cockpit voice recorder, often called the “black boxes.”

Every commercial plane has them. Aerospace giants GE Aerospace and Honeywell are among a few companies that design them to be nearly indestructible so they can help investigators understand the cause of a crash.

“They’re very crucial because it’s one of the few sources of information that tells us what happened leading up to the accident,” said Chris Babcock, branch chief of the vehicle recorder division at the National Transportation Safety Board. “We can get a lot of information from parts and from the airplane.”

Commercial aircraft have become very complex. A Boeing 787 Dreamliner records thousands of different pieces of information. In the case of the Air India crash in June, data revealed both engine fuel switches were put into a cutoff position within one second of each other. A voice recording from inside the cockpit captured the pilots discussing the cutoffs.

“All of those parameters today can have a very huge impact on the investigation,” said former NTSB member John Goglia. “It’s our goal to provide information back to our investigators who are on scene as quick as we can to help move the investigation forward.”

This crucial data can also help prevent future accidents. A crash can cost airlines or plane manufacturers hundreds of millions of dollars and leave victims’ families with a lifetime of grief.

But in some cases, black boxes have been destroyed or never found. Experts say further developments, such as cockpit video recorders and real-time data streaming, are needed.

“The technology is there. Crash-worthy cockpit video recorders are already being installed in a lot of helicopters and other types of airplanes, but they’re not required,” said Jeff Guzzetti, aviation analyst and former accident investigator for the Federal Aviation Administration and NTSB. “There’s privacy and cost issues involving cockpit video recorders, but the NTSB has been recommending that the FAA require them for years now.”


— CNBC’s Leslie Josephs contributed to this report.


Palantir has worst month in two years as AI stocks sell off

CEO of Palantir Technologies Alex Karp attends the Pennsylvania Energy and Innovation Summit, at Carnegie Mellon University in Pittsburgh, Pennsylvania, U.S., July 15, 2025.

Nathan Howard | Reuters

It’s been a tough November for Palantir.

Shares of the software analytics provider dropped 16% for their worst month since August 2023, as investors dumped AI stocks on valuation fears. Meanwhile, famed investor Michael Burry doubled down on his bearish view of the artificial intelligence trade and bet against the company.

Palantir started November off on a high note.

The Denver-based company topped Wall Street’s third-quarter earnings and revenue expectations. Palantir also posted its second-straight $1 billion revenue quarter, but high valuation concerns contributed to a post-print selloff.

In a note to clients, Jefferies analysts called Palantir’s valuation “extreme” and argued investors would find better risk-reward in AI names such as Microsoft and Snowflake. Analysts at RBC Capital Markets raised concerns about the company’s “increasingly concentrated growth profile,” while Deutsche Bank called the valuation “very difficult to wrap our heads around.”

Adding fuel to the post-earnings selloff was the revelation that Burry is betting against Palantir and AI chipmaker Nvidia. Burry, widely known for predicting the 2008 housing crisis and for his portrayal in the film “The Big Short,” later accused hyperscalers of artificially boosting earnings.

Palantir CEO Alex Karp vocally hit the front lines, appearing twice in one week on CNBC, where he accused Burry of “market manipulation” and called the investor’s actions “egregious.”

“The idea that chips and ontology is what you want to short is bats— crazy,” Karp told CNBC’s “Squawk Box.”

Despite the vicious selloff, Palantir has notched some deal wins this month. That included a multiyear contract with consulting firm PwC to speed up AI adoption in the U.K. and a deal with aircraft engine maintenance company FTAI.

But those announcements did little to shake off valuation worries that have haunted all AI-tied companies in November.

Across the board, investors have ditched the high-priced group, citing fears of stretched valuations and a bubble.

In November, Nvidia pulled back more than 12%, while Microsoft and Amazon dropped about 5% each. Quantum computing names such as Rigetti Computing and D-Wave Quantum have shed more than a third of their value.

Apple and Alphabet were the only Magnificent Seven stocks to end the month with gains.

Still, questions linger over Palantir’s valuation, and those worries aren’t new.

Even after its steep price drop, the company’s stock trades at 233 times forward earnings. By comparison, Nvidia and Alphabet traded at about 38 times and 30 times, respectively, at Friday’s close.

Karp, who has long defended the company, didn’t miss an opportunity to clap back at his critics, arguing in a letter to shareholders that the company is making it feasible for everyday investors to attain rates of return once “limited to the most successful venture capitalists in Palo Alto.”

“Please turn on the conventional television and see how unhappy those that didn’t invest in us are,” Karp said during an earnings call. “Enjoy, get some popcorn. They’re crying. We are every day making this company better, and we’re doing it for this nation, for allied countries.”

Palantir declined to comment for this story.

WATCH: Palantir CEO Alex Karp: We’ve printed venture results for the average American

