
AI research takes a backseat to profits as Silicon Valley prioritizes products over safety, experts say
Sam Altman, co-founder and CEO of OpenAI and co-founder of Tools for Humanity, participates remotely in a discussion on the sidelines of the IMF/World Bank Spring Meetings in Washington, D.C., April 24, 2025.
Brendan Smialowski | AFP | Getty Images
Not long ago, Silicon Valley was where the world’s leading artificial intelligence experts went to perform cutting-edge research.
Meta, Google and OpenAI opened their wallets for top talent, giving researchers staff, computing power and plenty of flexibility. With the support of their employers, the researchers published high-quality academic papers, openly sharing their breakthroughs with peers in academia and at rival companies.
But that era has ended. Now, experts say, AI is all about the product.
Since OpenAI released ChatGPT in late 2022, the tech industry has shifted its focus to building consumer-ready AI services, in many cases prioritizing commercialization over research, AI researchers and experts in the field told CNBC. The profit potential is massive — some analysts predict $1 trillion in annual revenue by 2028. The prospective repercussions terrify the corner of the AI universe concerned about safety, industry experts said, particularly as leading players pursue artificial general intelligence, or AGI, which is technology that rivals or exceeds human intelligence.
In the race to stay competitive, tech companies are taking an increasing number of shortcuts when it comes to the rigorous safety testing of their AI models before they are released to the public, industry experts told CNBC.
James White, chief technology officer at cybersecurity startup CalypsoAI, said newer models are sacrificing security for quality, that is, for better responses from AI chatbots. That makes them less likely to reject malicious prompts that could cause them to reveal bomb-making instructions or sensitive information that hackers could exploit, White said.
“The models are getting better, but they’re also more likely to be good at bad stuff,” said White, whose company performs safety and security audits of popular models from Meta, Google, OpenAI and other companies. “It’s easier to trick them to do bad stuff.”
The changes are readily apparent at Meta and Alphabet, which have deprioritized their AI research labs, experts say. At Facebook’s parent company, the Fundamental Artificial Intelligence Research, or FAIR, unit has been sidelined by Meta GenAI, according to current and former employees. And at Alphabet, the research group Google Brain is now part of DeepMind, the division that leads development of AI products at the tech company.
CNBC spoke with more than a dozen AI professionals in Silicon Valley who collectively tell the story of a dramatic shift in the industry away from research and toward revenue-generating products. Some are former employees at the companies with direct knowledge of what they say is the prioritization of building new AI products at the expense of research and safety checks. They say employees face intensifying development timelines, reinforcing the idea that they can’t afford to fall behind when it comes to getting new models and products to market. Some of the people asked not to be named because they weren’t authorized to speak publicly on the matter.
Mark Zuckerberg, CEO of Meta Platforms, during the Meta Connect event in Menlo Park, California, on Sept. 25, 2024.
David Paul Morris | Bloomberg | Getty Images
Meta’s AI evolution
When Joelle Pineau, a Meta vice president and the head of the company’s FAIR division, announced in April that she would be leaving her post, many former employees said they weren’t surprised. They said they viewed it as solidifying the company’s move away from AI research and toward prioritizing developing practical products.
“Today, as the world undergoes significant change, as the race for AI accelerates, and as Meta prepares for its next chapter, it is time to create space for others to pursue the work,” Pineau wrote on LinkedIn, adding that she will formally leave the company May 30.
Pineau began leading FAIR in 2023. The unit was established a decade earlier to work on difficult computer science problems typically tackled by academia. Yann LeCun, one of the godfathers of modern AI, initially oversaw the project, and instilled the research methodologies he learned from his time at the pioneering AT&T Bell Laboratories, according to several former employees at Meta. Small research teams could work on a variety of bleeding-edge projects that may or may not pan out.
The shift began when Meta laid off 21,000 employees, or nearly a quarter of its workforce, starting in late 2022. CEO Mark Zuckerberg kicked off 2023 by calling it the “year of efficiency.” FAIR researchers, as part of the cost-cutting measures, were directed to work more closely with product teams, several former employees said.
Two months before Pineau’s announcement, one of FAIR’s directors, Kim Hazelwood, left the company, two people familiar with the matter said. Hazelwood helped oversee FAIR’s NextSys unit, which manages computing resources for FAIR researchers. Her role was eliminated as part of Meta’s plan to cut 5% of its workforce, the people said.
Joelle Pineau of Meta speaks at the Advancing Sustainable Development through Safe, Secure, and Trustworthy AI event at Grand Central Terminal in New York, Sept. 23, 2024.
Bryan R. Smith | Via Reuters
OpenAI’s 2022 launch of ChatGPT caught Meta off guard, creating a sense of urgency to pour more resources into large language models, or LLMs, that were captivating the tech industry, the people said.
In 2023, Meta began heavily pushing its freely available and open-source Llama family of AI models to compete with OpenAI, Google and others.
With Zuckerberg and other executives convinced that LLMs were game-changing technologies, management had less incentive to let FAIR researchers work on far-flung projects, several former employees said. That meant deprioritizing research that could be viewed as having no impact on Meta’s core business, such as FAIR’s previous health care-related research into using AI to improve drug therapies.
Since 2024, Meta Chief Product Officer Chris Cox has been overseeing FAIR as a way to bridge the gap between research and the product-focused GenAI group, people familiar with the matter said. The GenAI unit oversees the Llama family of AI models and the Meta AI digital assistant, the two most important pillars of Meta’s AI strategy.
Under Cox, the GenAI unit has been siphoning more computing resources and team members from FAIR due to its elevated status at Meta, the people said. Many researchers have transferred to GenAI or left the company entirely to launch their own research-focused startups or join rivals, several of the former employees said.
While Zuckerberg has some internal support for pushing the GenAI group to rapidly develop real-world products, there’s also concern among some staffers that Meta is now less able to develop industry-leading breakthroughs that can be derived from experimental work, former employees said. That leaves Meta to chase its rivals.
A high-profile example landed in January, when Chinese lab DeepSeek released its R1 model, catching Meta off guard. The startup claimed it was able to develop a model as capable as its American counterparts, trained at a fraction of the cost.
Meta quickly implemented some of DeepSeek’s innovative techniques for its Llama 4 family of AI models that were released in April, former employees said. The AI research community had a mixed reaction to the smaller versions of Llama 4, but Meta said the biggest and most powerful Llama 4 variant is still being trained.
The company in April also released security and safety tools for developers to use when building apps with Meta’s Llama 4 AI models. These tools help mitigate the chances of Llama 4 unintentionally leaking sensitive information or producing harmful content, Meta said.
“Our commitment to FAIR remains strong,” a Meta spokesperson told CNBC. “Our strategy and plans will not change as a result of recent developments.”
In a statement to CNBC, Pineau said she is enthusiastic about Meta’s overall AI work and strategy.
“There continues to be strong support for exploratory research and FAIR as a distinct organization in Meta,” Pineau said. “The time was simply right for me personally to re-focus my energy before jumping into a new adventure.”
Meta on Thursday named FAIR co-founder Rob Fergus as Pineau’s replacement. Fergus will return to the company to serve as a director at Meta and head of FAIR, according to his LinkedIn profile. He was most recently a research director at Google DeepMind.
“Meta’s commitment to FAIR and long term research remains unwavering,” Fergus said in a LinkedIn post. “We’re working towards building human-level experiences that transform the way we interact with technology and are dedicated to leading and advancing AI research.”
Demis Hassabis, co-founder and CEO of Google DeepMind, attends the Artificial Intelligence Action Summit at the Grand Palais in Paris, Feb. 10, 2025.
Benoit Tessier | Reuters
Google ‘can’t keep building nanny products’
Google released its latest and most powerful AI model, Gemini 2.5, in March. The company described it as “our most intelligent AI model,” and wrote in a March 25 blog post that its new models are “capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy.”
For weeks, Gemini 2.5 was missing a model card, meaning Google did not share information about how the AI model worked or its limitations and potential dangers upon its release.
Model cards are a common tool for AI transparency.
A Google website compares model cards to food nutrition labels: They outline “the key facts about a model in a clear, digestible format,” the website says.
“By making this information easy to access, model cards support responsible AI development and the adoption of robust, industry-wide standards for broad transparency and evaluation practices,” the website says.
Google wrote in an April 2 blog post that it evaluates its “most advanced models, such as Gemini, for potential dangerous capabilities prior to their release.” Google later updated the blog to remove the words “prior to their release.”
Without a model card for Gemini 2.5, the public had no way of knowing which safety evaluations were conducted or whether DeepMind checked for dangerous capabilities at all.
In response to CNBC’s inquiry on April 2 about Gemini 2.5’s missing model card, a Google spokesperson said that a “tech report with additional safety information and model cards are forthcoming.” Google published an incomplete model card on April 16 and updated it on April 28, more than a month after the AI model’s release, to include information about Gemini 2.5’s “dangerous capability evaluations.”
Those assessments are important for gauging the safety of a model — whether people can use the models to learn how to build chemical or nuclear weapons or hack into important systems. These checks also determine whether a model is capable of autonomously replicating itself, which could lead to a company losing control of it. Running tests for those capabilities requires more time and resources than simple, automated safety evaluations, according to industry experts.
Google co-founder Sergey Brin
Kelly Sullivan | Getty Images Entertainment | Getty Images
The Financial Times in March reported that Google DeepMind CEO Demis Hassabis had installed a more rigorous vetting process for internal research papers to be published. The clampdown at Google is particularly notable because the company’s “Transformers” technology gained recognition across Silicon Valley through that type of shared research. Transformers were critical to OpenAI’s development of ChatGPT and the rise of generative AI.
Google co-founder Sergey Brin told staffers at DeepMind and Gemini in February that competition has accelerated and “the final race to AGI is afoot,” according to a memo viewed by CNBC. “We have all the ingredients to win this race but we are going to have to turbocharge our efforts,” he said in the memo.
Brin said in the memo that Google has to speed up the process of testing AI models, as the company needs “lots of ideas that we can test quickly.”
“We need real wins that scale,” Brin wrote.
In his memo, Brin also wrote that the company’s methods have “a habit of minor tweaking and overfitting” products for evaluations and “sniping” the products at checkpoints. He said employees need to build “capable products” and to “trust our users” more.
“We can’t keep building nanny products,” Brin wrote. “Our products are overrun with filters and punts of various kinds.”
A Google spokesperson told CNBC that the company has always been committed to advancing AI responsibly.
“We continue to do that through the safe development and deployment of our technology, and research contributions to the broader ecosystem,” the spokesperson said.
Sam Altman, CEO of OpenAI, is seen through glass during an event on the sidelines of the Artificial Intelligence Action Summit in Paris, Feb. 11, 2025.
Aurelien Morissard | Via Reuters
OpenAI’s rush through safety testing
The debate over product versus research is at the center of OpenAI’s existence. The company was founded as a nonprofit research lab in 2015 and is now in the midst of a contentious effort to transform into a for-profit entity.
That’s the direction co-founder and CEO Sam Altman has been pushing toward for years. On May 5, though, OpenAI bowed to pressure from civic leaders and former employees, announcing that its nonprofit would retain control of the company even as it restructures into a public benefit corporation.
Nisan Stiennon worked at OpenAI from 2018 to 2020 and was among a group of former employees urging California and Delaware not to approve OpenAI’s restructuring effort. “OpenAI may one day build technology that could get us all killed,” Stiennon wrote in a statement in April. “It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity.”
But even with the nonprofit maintaining control and majority ownership, OpenAI is speedily working to commercialize products as competition heats up in generative AI. And it may have rushed the rollout of its o1 reasoning model last year, according to some portions of its model card.
Results of the model’s “preparedness evaluations,” the tests OpenAI runs to assess an AI model’s dangerous capabilities and other risks, were based on earlier versions of o1. They had not been run on the final version of the model, according to its model card, which is publicly available.
Johannes Heidecke, OpenAI’s head of safety systems, told CNBC in an interview that the company ran its preparedness evaluations on near-final versions of the o1 model. Minor variations to the model that took place after those tests wouldn’t have contributed to significant jumps in its intelligence or reasoning and thus wouldn’t require additional evaluations, he said. Still, Heidecke acknowledged that OpenAI missed an opportunity to more clearly explain the difference.
OpenAI’s newest reasoning model, o3, released in April, seems to hallucinate more than twice as often as o1, according to the model card. When an AI model hallucinates, it produces falsehoods or illogical information.
OpenAI has also been criticized for reportedly slashing safety testing times from months to days and for omitting the requirement to safety test fine-tuned models in its latest “Preparedness Framework.”
Heidecke said OpenAI has decreased the time needed for safety testing because the company has improved its testing effectiveness and efficiency. A company spokesperson said OpenAI has allocated more AI infrastructure and personnel to its safety testing, and has increased resources for paying experts and growing its network of external testers.
In April, the company shipped GPT-4.1, one of its new models, without a safety report, as the model was not designated by OpenAI as a “frontier model,” which is a term used by the tech industry to refer to a bleeding-edge, large-scale AI model.
But one seemingly small revision caused a big wave in April. Within days of updating its GPT-4o model, OpenAI rolled back the changes after screenshots of overly flattering responses to ChatGPT users went viral online. OpenAI said in a blog post explaining its decision that those types of responses to user inquiries “raise safety concerns — including around issues like mental health, emotional over-reliance, or risky behavior.”
OpenAI said in the blog post that it opted to release the model even after some expert testers flagged that its behavior “‘felt’ slightly off.”
“In the end, we decided to launch the model due to the positive signals from the users who tried out the model. Unfortunately, this was the wrong call,” OpenAI wrote. “Looking back, the qualitative assessments were hinting at something important, and we should’ve paid closer attention. They were picking up on a blind spot in our other evals and metrics.”
Metr, a company OpenAI partners with to test and evaluate its models for safety, said in a recent blog post that it was given less time to test the o3 and o4-mini models than predecessors.
“Limitations in this evaluation prevent us from making robust capability assessments,” Metr wrote, adding that the tests it did were “conducted in a relatively short time.”
Metr also wrote that it had insufficient access to data that would be important in determining the potential dangers of the two models.
The company said it wasn’t able to access the OpenAI models’ internal reasoning, which is “likely to contain important information for interpreting our results.” However, Metr said, “OpenAI shared helpful information on some of their own evaluation results.”
OpenAI’s spokesperson said the company is piloting secure ways of sharing chains of thought for Metr’s research as well as for other third-party organizations.
Steven Adler, a former safety researcher at OpenAI, told CNBC that safety testing a model before it’s rolled out is no longer enough to safeguard against potential dangers.
“You need to be vigilant before and during training to reduce the chance of creating a very capable, misaligned model in the first place,” Adler said.
He warned that companies such as OpenAI are backed into a corner when they create capable but misaligned models with goals that are different from the ones they intended to build.
“Unfortunately, we don’t yet have strong scientific knowledge for fixing these models — just ways of papering over the behavior,” Adler said.

Global movement to protect kids online fuels a wave of AI safety tech
Published August 30, 2025
Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.
STR | Nurphoto via Getty Images
The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.
In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.
Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.
This regulatory push is prompting a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.
Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.
Such regulatory measures have been met with criticism from the tech industry — not least due to concerns that they may infringe internet users’ privacy.
Digital ID tech flourishing
At the heart of all these age verification measures is one company: Yoti.
Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.
The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.
“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”
Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.
“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”
Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.
“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”
Child-safe smartphones
The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.
Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.
The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.
Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.
HMD Global
“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”
The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.
Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.
The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. They in return argue they’ve taken steps to address these issues through increased parental controls and privacy features.
“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”
‘AI may eat software,’ but several tech names just wrapped a huge week
Published August 29, 2025
A banner for Snowflake Inc. is displayed at the New York Stock Exchange to celebrate the company’s initial public offering on Sept. 16, 2020.
Brendan McDermid | Reuters
MongoDB’s stock just closed out its best week on record, leading a rally in enterprise technology companies that are seeing tailwinds from the artificial intelligence boom.
In addition to MongoDB’s 44% rally, Pure Storage soared 33%, its second-sharpest gain ever, while Snowflake jumped 21%. Autodesk rose 8.4%.
Since generative AI started taking off in late 2022 following the launch of OpenAI’s ChatGPT, the big winners have been Nvidia, for its graphics processing units, as well as the cloud vendors like Microsoft, Google and Oracle, and companies packaging and selling GPUs, such as Dell and Super Micro Computer.
For many cloud software vendors and other enterprise tech companies, Wall Street has been waiting to see if AI will be a boon to their business, or if it might displace it.
Quarterly results this week and commentary from company executives may have eased some of those concerns, showing that the financial benefits of AI are making their way downstream.
MongoDB CEO Dev Ittycheria told CNBC’s “Squawk Box” on Wednesday that enterprise rollouts of AI services are happening, but slowly.
“You start to see deployments of agents to automate back office, maybe automate sales and marketing, but it’s still not yet kind of full force in the enterprise,” Ittycheria said. “People want to see some wins before they deploy more investment.”
Revenue at MongoDB, which sells cloud database services, rose 24% from a year earlier to $591 million, sailing past the $556 million average analyst estimate, according to LSEG. Earnings also exceeded expectations, as did the company’s full-year forecast for profit and revenue.

MongoDB said in its earnings report that it’s added more than 5,000 customers year-to-date, “the highest ever in the first half of the year.”
“We think that’s a good sign of future growth because a lot of these companies are AI native companies who are coming to MongoDB to run their business,” Ittycheria said.
Pure Storage enjoyed a record pop on Thursday, when the stock jumped 32% to an all-time high.
The data storage management vendor reported quarterly results that topped estimates and lifted its guidance for the year. But what’s exciting investors the most is early returns from Pure’s recent contract with Meta. Pure will help the social media company efficiently manage the massive storage demands created by AI.
Pure said it started recognizing revenue from its Meta deployments in the second quarter, and finance chief Tarek Robbiati said on the earnings call that the company is seeing “increased interest from other hyperscalers” looking to replace their traditional storage with Pure’s technology.
‘Banger of a report’
Reports from MongoDB and Pure landed the same week that Nvidia announced quarterly earnings, and said revenue soared 56% from a year earlier, marking a ninth-straight quarter of growth in excess of 50%.
Nvidia has emerged as the world’s most-valuable company by selling advanced AI processors to all of the infrastructure providers and model developers.
While growth at Nvidia has slowed from its triple-digit rate in 2023 and 2024, it’s still expanding at a much faster pace than its megacap peers, indicating that there’s no end in sight when it comes to the expansive AI buildouts.
“It was a banger of a report,” said Brad Gerstner, CEO of Altimeter Capital, in an interview with CNBC’s “Halftime Report” on Thursday. “This company is accelerating at scale.”
Data analytics vendor Snowflake talked up its Snowflake AI data cloud in its quarterly earnings report on Wednesday.
Snowflake shares popped 20% following better-than-expected earnings and revenue. The company also boosted its guidance for the year for product revenue, and said it has more than 6,100 customers using Snowflake AI, up from 5,200 during the prior quarter.
“Our progress with AI has been remarkable,” Snowflake CEO Sridhar Ramaswamy said on the earnings call. “Today, AI is a core reason why customers are choosing Snowflake, influencing nearly 50% of new logos won in Q2.”
Autodesk, founded in 1982, has been around much longer than MongoDB, Pure Storage or Snowflake. The company is known for its AutoCAD software used in architecture and construction.
The company has underperformed the broader tech sector of late, and last year activist investor Starboard Value jumped into the stock to push for improvements in operations and financial performance, including cost cuts. In February, Autodesk slashed 9% of its workforce, and two months later the company settled with Starboard, adding two newcomers to its board.
The stock is still trailing the Nasdaq for the year, but climbed 9.1% on Friday after Autodesk reported results that exceeded Wall Street estimates and increased its full-year revenue guidance.
Last year, Autodesk introduced Project Bernini to develop new AI models and create what it calls “AI‑driven CAD engines.”
On Thursday’s earnings call, CEO Andrew Anagnost was asked what he’s most excited about across his company’s product portfolio when it comes to AI.
Anagnost touted the ability of Autodesk to help customers simplify workflow across products and promoted the Autodesk Assistant as a way to enhance productivity through simple prompts.
He also addressed the elephant in the room: The existential threat that AI presents.
“AI may eat software,” he said, “but it’s not gonna eat Autodesk.”

Meta changes teen AI chatbot responses as Senate begins probe into ‘romantic’ conversations
Published August 29, 2025
Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.
Nathan Howard | Reuters
Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.
The social media giant is now training its AI chatbots so that they do not generate responses to teenagers about subjects like self-harm, suicide and disordered eating, and so that they avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.
The company said AI chatbots will instead point teenagers to expert resources when appropriate.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.
Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.
The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of Meta’s longer-term work on teen safety.
TechCrunch was first to report the change.
Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.
The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.
In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday, saying the chatbot should not be used by anyone under the age of 18 because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”
“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.
The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”
A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”
