
AI engineers report burnout and rushed rollouts as ‘rat race’ to stay competitive hits tech industry
Published 1 year ago by admin
A picture shows logos of the big technology companies named GAFAM, for Google, Apple, Facebook, Amazon and Microsoft, in Mulhouse, France, on June 2, 2023.
Sebastien Bozon | AFP | Getty Images
Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday.
There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job.
But it was all for nothing. The project was ultimately “deprioritized,” the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.
The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Because untested code can break in production, he recalled periods when team members had to call one another in the middle of the night to fix parts of the AI feature's software.
AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its “iPhone moment.”
The tech workers spoke to CNBC mostly on the condition that they remain unnamed because they weren’t authorized to speak to the media. The experiences they shared illustrate a broader trend across the industry, rather than a single company’s approach to AI.
They spoke of accelerated timelines, chasing rivals’ AI announcements and an overall lack of concern from their superiors about real-world effects, themes that appear common across a broad spectrum of the biggest tech companies — from Apple to Amazon to Google.
Engineers and those in other roles in the field said an increasingly large part of their job was focused on satisfying investors and keeping pace with the competition rather than solving actual problems for users. Some said they were moved onto AI teams to support fast-paced rollouts despite being new to the technology, without adequate time to train or learn about AI.
A common feeling they described is burnout from immense pressure, long hours and mandates that are constantly changing. Many said their employers are looking past surveillance concerns, AI’s effect on the climate and other potential harms, all in the name of speed. Some said they or their colleagues were looking for other jobs or switching out of AI departments, due to an untenable pace.

This is the dark underbelly of the generative AI gold rush. Tech companies are racing to build chatbots, agents and image generators, and they’re spending billions of dollars training their own large language models to ensure their relevance in a market that’s predicted to top $1 trillion in revenue within a decade.
Tech’s megacap companies aren’t being shy about acknowledging to investors and employees how much AI is shaping their decision-making.
Microsoft Chief Financial Officer Amy Hood, on an earnings call earlier this year, said the software company is “repivoting our workforce toward the AI-first work we’re doing without adding material number of people to the workforce,” and said Microsoft will continue to prioritize investing in AI as “the thing that’s going to shape the next decade.”
Meta CEO Mark Zuckerberg spent much of his opening remarks on his company’s earnings call last week focused on AI products and services and the advancements in its large language model called Llama 3.
“This leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world,” Zuckerberg said.
At Amazon, CEO Andy Jassy told investors last week that the “generative AI opportunity” is almost unprecedented, and that increased capital spending is necessary to take advantage of it.
“I don’t know if any of us has seen a possibility like this in technology in a really long time, for sure since the cloud, perhaps since the Internet,” Jassy said.
Speed above everything
On the ground floor, where those investments are taking place, things can get messy.
The Amazon engineer, who lost his weekend to a project that was ultimately scuttled, said higher-ups seemed to be doing things just to “tick a checkbox,” and that speed, rather than quality, was the priority while trying to recreate products coming out of Microsoft or OpenAI.
In an emailed statement to CNBC, an Amazon spokesperson said the company is “focused on building and deploying useful, reliable, and secure generative AI innovations that reinvent and enhance customers’ experiences,” and that Amazon is supporting its employees to “deliver those innovations.”
“It’s inaccurate and misleading to use a single employee’s anecdote to characterize the experience of all Amazon employees working in AI,” the spokesperson said.
Last year marked the beginning of the generative AI boom, following the debut of OpenAI’s ChatGPT near the end of 2022. Since then, Microsoft, Alphabet, Meta, Amazon and others have been snapping up Nvidia’s processors, which are at the core of most big AI models.
While companies such as Alphabet and Amazon continue to downsize their total headcount, they’re aggressively hiring AI experts and pouring resources into building their models and developing features for consumers and businesses.
Eric Gu, a former Apple employee who spent about four years working on AI initiatives, including for the Vision Pro headset, said that toward the end of his time at the company, he felt “boxed in.”
“Apple is a very product-focused company, so there’s this intense pressure to immediately be productive, start shipping and contributing features,” Gu said. He said that even though he was surrounded by “these brilliant people,” there was no time to really learn from them.
“It boils down to the pace at which it felt like you had to ship and perform,” said Gu, who left Apple a year ago to join AI startup Imbue, where he said he can work on equally ambitious projects but at a more measured pace.
Apple declined to comment.
Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event in San Francisco on Nov. 6, 2023.
Justin Sullivan | Getty Images
An AI engineer at Microsoft said the company is engaged in an “AI rat race.”
When it comes to ethics and safeguards, he said, Microsoft has cut corners in favor of speed, leading to rushed rollouts without sufficient concerns about what could follow. The engineer said there’s a recognition that because all of the large tech companies have access to most of the same data, there’s no real moat in AI.
Microsoft didn’t provide a comment.
Morry Kolman, an independent software engineer and digital artist who has worked on viral projects that have garnered more than 200,000 users, said that in the age of rapid advancement in AI, “it’s hard to figure out where is worth investing your time.”
“And that is very conducive to burnout just in the sense that it makes it hard to believe in something,” Kolman said, adding, “I think that the biggest thing for me is that it’s not cool or fun anymore.”
At Google, an AI team member said the burnout is the result of competitive pressure, shorter timelines and a lack of resources, particularly budget and headcount. Although many top tech companies have said they are redirecting resources to AI, the required headcount, especially on a rushed timeline, doesn’t always materialize. That is certainly the case at Google, the AI staffer said.
The company’s hurried output has led to some public embarrassment. Google Gemini’s image-generation tool was released and promptly taken offline in February after users discovered historical inaccuracies and questionable responses. In early 2023, Google employees criticized leadership, most notably CEO Sundar Pichai, for what they called a “rushed” and “botched” announcement of its initial ChatGPT competitor called Bard.
The Google AI engineer, who has over a decade of experience in tech, said she understands the pressure to move fast, given the intense competition in generative AI, but it’s all happening as the industry is in cost-cutting mode, with companies slashing their workforce to meet investor demands and “increase their bottom line,” she said.
There’s also the conference schedule. AI teams had to prepare for the Google I/O developer event in May 2023, followed by Cloud Next in August and then another Cloud Next conference in April 2024. That’s a significantly shorter gap between events than normal, and created a crunch for a team that was “beholden to conference timelines” for shipping features, the Google engineer said.
Google didn’t provide a comment for this story.
The sentiment in AI is not limited to the biggest companies.
An AI researcher at a government agency reported feeling rushed to keep up. Even though the government is notorious for moving slower than companies, the pressure “trickles down everywhere,” since everyone wants to get in on generative AI, the person said.
And it’s happening at startups.
There are companies getting funded by “really big VC firms who are expecting this 10X-like return,” said Ayodele Odubela, a data scientist and AI policy advisor.
“They’re trying to strike while the iron is hot,” she said.
‘A big pile of nonsense’
Regardless of the employer, AI workers said much of their jobs involve working on AI for the sake of AI, rather than to solve a business problem or to serve customers directly.
“A lot of times, it’s being asked to provide a solution to a problem that doesn’t exist with a tool that you don’t want to use,” independent software engineer Kolman told CNBC.
The Microsoft AI engineer said a lot of tasks are about “trying to create AI hype” with no practical use. He recalled instances when a software engineer on his team would come up with an algorithm to solve a particular problem that didn’t involve generative AI. That solution would be pushed aside in favor of one that used a large language model, even if it were less efficient, more expensive and slower, the person said. He described the irony of using an “inferior solution” just because it involved an AI model.
A software engineer at a major internet company, which the person asked to keep unnamed due to his group’s small size, said the new team he works on dedicated to AI advancement is doing large language model research “because that’s what’s hot right now.”
The engineer has worked in machine learning for years, and described much of the work in generative AI today as an “extreme amount of vaporware and hype.” Every two weeks, the engineer said, there’s some sort of big pivot, but ultimately there’s the sense that everyone is building the same thing.

He said he often has to put together demos of AI products for the company’s board of directors on three-week timelines, even though the products are “a big pile of nonsense.” There’s a constant effort to appease investors and fight for money, he said. He gave one example of building a web app to show investors even though it wasn’t related to the team’s actual work. After the presentation, “We never touched it again,” he said.
A product manager at a fintech startup said one of his projects involved a rebranding of the company’s algorithms to AI. He also worked on a ChatGPT plug-in for customers. Executives at the company never told the team why it was needed.
The employee said it felt “out of order.” The company was starting with a solution involving AI without ever defining the problem.
An AI engineer who works at a retail surveillance startup told CNBC that he’s the only AI engineer at a company of 40 people and that he handles any responsibility related to AI, which is an overwhelming task.
He said the company’s investors have inaccurate views on the capabilities of AI, often asking him to build certain things that are “impossible for me to deliver.” He said he hopes to leave for graduate school and to publish research independently.
Risky business
The Google staffer said that about six months into her role, she felt she could finally keep her head above water. Even then, she said, the pressure continued to mount, as the demands on the team were “not sustainable.”
She used the analogy of “building the plane while flying it” to describe the company’s approach to product development.
Amazon Web Services CEO Adam Selipsky speaks with Anthropic CEO and co-founder Dario Amodei during AWS re:Invent 2023, a conference hosted by Amazon Web Services, at The Venetian Las Vegas in Las Vegas on Nov. 28, 2023.
Noah Berger | Getty Images
The Amazon AI engineer expressed a similar sentiment, saying everyone on his current team was pulled into working on a product that was running behind schedule, and that many were “thrown into it” without relevant experience and onboarding.
He also said AI accuracy, and testing in general, has taken a backseat to the speed of product rollouts, despite “motivational speeches” from managers about how their work will “revolutionize the industry.”
Odubela underscored the ethical risks of inadequate training for AI workers and with rushing AI projects to keep up with competition. She pointed to the problems with Google Gemini’s image creator when the product hit the market in February. In one instance, a user asked Gemini to show a German soldier in 1943, and the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots viewed by CNBC.
“The biggest piece that’s missing is lacking the ability to work with domain experts on projects, and the ability to even evaluate them as stringently as they should be evaluated before release,” Odubela said, regarding the current ethos in AI.
At a moment in technology when thoughtfulness is more important than ever, some of the leading companies appear to be doing the opposite.
“I think the major harm that comes is there’s no time to think critically,” Odubela said.

AI research takes a backseat to profits as Silicon Valley prioritizes products over safety, experts say
Published May 14, 2025, by admin
Sam Altman, co-founder and CEO of OpenAI and co-founder of Tools for Humanity, participates remotely in a discussion on the sidelines of the IMF/World Bank Spring Meetings in Washington, D.C., April 24, 2025.
Brendan Smialowski | AFP | Getty Images
Not long ago, Silicon Valley was where the world’s leading artificial intelligence experts went to perform cutting-edge research.
Meta, Google and OpenAI opened their wallets for top talent, giving researchers staff, computing power and plenty of flexibility. With the support of their employers, the researchers published high-quality academic papers, openly sharing their breakthroughs with peers in academia and at rival companies.
But that era has ended. Now, experts say, AI is all about the product.
Since OpenAI released ChatGPT in late 2022, the tech industry has shifted its focus to building consumer-ready AI services, in many cases prioritizing commercialization over research, AI researchers and experts in the field told CNBC. The profit potential is massive — some analysts predict $1 trillion in annual revenue by 2028. The prospective repercussions terrify the corner of the AI universe concerned about safety, industry experts said, particularly as leading players pursue artificial general intelligence, or AGI, which is technology that rivals or exceeds human intelligence.
In the race to stay competitive, tech companies are taking an increasing number of shortcuts when it comes to the rigorous safety testing of their AI models before they are released to the public, industry experts told CNBC.
James White, chief technology officer at cybersecurity startup CalypsoAI, said newer models are sacrificing security for quality, that is, better responses by the AI chatbots. That means they’re less likely to reject malicious kinds of prompts that could cause them to reveal ways to build bombs or sensitive information that hackers could exploit, White said.
“The models are getting better, but they’re also more likely to be good at bad stuff,” said White, whose company performs safety and security audits of popular models from Meta, Google, OpenAI and other companies. “It’s easier to trick them to do bad stuff.”
The changes are readily apparent at Meta and Alphabet, which have deprioritized their AI research labs, experts say. At Facebook’s parent company, the Fundamental Artificial Intelligence Research, or FAIR, unit has been sidelined by Meta GenAI, according to current and former employees. And at Alphabet, the research group Google Brain is now part of DeepMind, the division that leads development of AI products at the tech company.
CNBC spoke with more than a dozen AI professionals in Silicon Valley who collectively tell the story of a dramatic shift in the industry away from research and toward revenue-generating products. Some are former employees at the companies with direct knowledge of what they say is the prioritization of building new AI products at the expense of research and safety checks. They say employees face intensifying development timelines, reinforcing the idea that they can’t afford to fall behind when it comes to getting new models and products to market. Some of the people asked not to be named because they weren’t authorized to speak publicly on the matter.
Mark Zuckerberg, CEO of Meta Platforms, during the Meta Connect event in Menlo Park, California, on Sept. 25, 2024.
David Paul Morris | Bloomberg | Getty Images
Meta’s AI evolution
When Joelle Pineau, a Meta vice president and the head of the company’s FAIR division, announced in April that she would be leaving her post, many former employees said they weren’t surprised. They said they viewed it as solidifying the company’s move away from AI research and toward prioritizing developing practical products.
“Today, as the world undergoes significant change, as the race for AI accelerates, and as Meta prepares for its next chapter, it is time to create space for others to pursue the work,” Pineau wrote on LinkedIn, adding that she will formally leave the company May 30.
Pineau began leading FAIR in 2023. The unit was established a decade earlier to work on difficult computer science problems typically tackled by academia. Yann LeCun, one of the godfathers of modern AI, initially oversaw the project, and instilled the research methodologies he learned from his time at the pioneering AT&T Bell Laboratories, according to several former employees at Meta. Small research teams could work on a variety of bleeding-edge projects that may or may not pan out.
The shift began when Meta laid off 21,000 employees, or nearly a quarter of its workforce, starting in late 2022. CEO Mark Zuckerberg kicked off 2023 by calling it the “year of efficiency.” FAIR researchers, as part of the cost-cutting measures, were directed to work more closely with product teams, several former employees said.
Two months before Pineau’s announcement, one of FAIR’s directors, Kim Hazelwood, left the company, two people familiar with the matter said. Hazelwood helped oversee FAIR’s NextSys unit, which manages computing resources for FAIR researchers. Her role was eliminated as part of Meta’s plan to cut 5% of its workforce, the people said.
Joelle Pineau of Meta speaks at the Advancing Sustainable Development through Safe, Secure, and Trustworthy AI event at Grand Central Terminal in New York, Sept. 23, 2024.
Bryan R. Smith | Via Reuters
OpenAI’s 2022 launch of ChatGPT caught Meta off guard, creating a sense of urgency to pour more resources into large language models, or LLMs, that were captivating the tech industry, the people said.
In 2023, Meta began heavily pushing its freely available and open-source Llama family of AI models to compete with OpenAI, Google and others.
With Zuckerberg and other executives convinced that LLMs were game-changing technologies, management had less incentive to let FAIR researchers work on far-flung projects, several former employees said. That meant deprioritizing research that could be viewed as having no impact on Meta’s core business, such as FAIR’s previous health care-related research into using AI to improve drug therapies.
Since 2024, Meta Chief Product Officer Chris Cox has been overseeing FAIR as a way to bridge the gap between research and the product-focused GenAI group, people familiar with the matter said. The GenAI unit oversees the Llama family of AI models and the Meta AI digital assistant, the two most important pillars of Meta’s AI strategy.
Under Cox, the GenAI unit has been siphoning more computing resources and team members from FAIR due to its elevated status at Meta, the people said. Many researchers have transferred to GenAI or left the company entirely to launch their own research-focused startups or join rivals, several of the former employees said.
While Zuckerberg has some internal support for pushing the GenAI group to rapidly develop real-world products, there’s also concern among some staffers that Meta is now less able to develop industry-leading breakthroughs that can be derived from experimental work, former employees said. That leaves Meta to chase its rivals.
A high-profile example landed in January, when Chinese lab DeepSeek released its R1 model, catching Meta off guard. The startup claimed it was able to develop a model as capable as its American counterparts but with training at a fraction of the cost.
Meta quickly implemented some of DeepSeek’s innovative techniques for its Llama 4 family of AI models that were released in April, former employees said. The AI research community had a mixed reaction to the smaller versions of Llama 4, but Meta said the biggest and most powerful Llama 4 variant is still being trained.
The company in April also released security and safety tools for developers to use when building apps with Meta’s Llama 4 AI models. These tools help mitigate the chances of Llama 4 unintentionally leaking sensitive information or producing harmful content, Meta said.
“Our commitment to FAIR remains strong,” a Meta spokesperson told CNBC. “Our strategy and plans will not change as a result of recent developments.”
In a statement to CNBC, Pineau said she is enthusiastic about Meta’s overall AI work and strategy.
“There continues to be strong support for exploratory research and FAIR as a distinct organization in Meta,” Pineau said. “The time was simply right for me personally to re-focus my energy before jumping into a new adventure.”
Meta on Thursday named FAIR co-founder Rob Fergus as Pineau’s replacement. Fergus will return to the company to serve as a director at Meta and head of FAIR, according to his LinkedIn profile. He was most recently a research director at Google DeepMind.
“Meta’s commitment to FAIR and long term research remains unwavering,” Fergus said in a LinkedIn post. “We’re working towards building human-level experiences that transform the way we interact with technology and are dedicated to leading and advancing AI research.”
Demis Hassabis, co-founder and CEO of Google DeepMind, attends the Artificial Intelligence Action Summit at the Grand Palais in Paris, Feb. 10, 2025.
Benoit Tessier | Reuters
Google ‘can’t keep building nanny products’
Google released its latest and most powerful AI model, Gemini 2.5, in March. The company described it as “our most intelligent AI model,” and wrote in a March 25 blog post that its new models are “capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy.”
For weeks, Gemini 2.5 was missing a model card, meaning Google did not share information about how the AI model worked or its limitations and potential dangers upon its release.
Model cards are a common tool for AI transparency.
A Google website compares model cards to food nutrition labels: They outline “the key facts about a model in a clear, digestible format,” the website says.
“By making this information easy to access, model cards support responsible AI development and the adoption of robust, industry-wide standards for broad transparency and evaluation practices,” the website says.
Google wrote in an April 2 blog post that it evaluates its “most advanced models, such as Gemini, for potential dangerous capabilities prior to their release.” Google later updated the blog to remove the words “prior to their release.”
Without a model card for Gemini 2.5, the public had no way of knowing which safety evaluations were conducted or whether DeepMind checked for dangerous capabilities at all.
In response to CNBC’s inquiry on April 2 about Gemini 2.5’s missing model card, a Google spokesperson said that a “tech report with additional safety information and model cards are forthcoming.” Google published an incomplete model card on April 16 and updated it on April 28, more than a month after the AI model’s release, to include information about Gemini 2.5’s “dangerous capability evaluations.”
Those assessments are important for gauging the safety of a model — whether people can use the models to learn how to build chemical or nuclear weapons or hack into important systems. These checks also determine whether a model is capable of autonomously replicating itself, which could lead to a company losing control of it. Running tests for those capabilities requires more time and resources than simple, automated safety evaluations, according to industry experts.
Google co-founder Sergey Brin
Kelly Sullivan | Getty Images Entertainment | Getty Images
The Financial Times in March reported that Google DeepMind CEO Demis Hassabis had installed a more rigorous vetting process for internal research papers to be published. The clampdown at Google is particularly notable because the company’s “Transformers” technology gained recognition across Silicon Valley through that type of shared research. Transformers were critical to OpenAI’s development of ChatGPT and the rise of generative AI.
Google co-founder Sergey Brin told staffers at DeepMind and Gemini in February that competition has accelerated and “the final race to AGI is afoot,” according to a memo viewed by CNBC. “We have all the ingredients to win this race but we are going to have to turbocharge our efforts,” he said in the memo.
Brin said in the memo that Google has to speed up the process of testing AI models, as the company needs “lots of ideas that we can test quickly.”
“We need real wins that scale,” Brin wrote.
In his memo, Brin also wrote that the company’s methods have “a habit of minor tweaking and overfitting” products for evaluations and “sniping” the products at checkpoints. He said employees need to build “capable products” and to “trust our users” more.
“We can’t keep building nanny products,” Brin wrote. “Our products are overrun with filters and punts of various kinds.”
A Google spokesperson told CNBC that the company has always been committed to advancing AI responsibly.
“We continue to do that through the safe development and deployment of our technology, and research contributions to the broader ecosystem,” the spokesperson said.
Sam Altman, CEO of OpenAI, is seen through glass during an event on the sidelines of the Artificial Intelligence Action Summit in Paris, Feb. 11, 2025.
Aurelien Morissard | Via Reuters
OpenAI’s rush through safety testing
The debate over product versus research is at the center of OpenAI’s existence. The company was founded as a nonprofit research lab in 2015 and is now in the midst of a contentious effort to transform into a for-profit entity.
That’s the direction co-founder and CEO Sam Altman has been pushing toward for years. On May 5, though, OpenAI bowed to pressure from civic leaders and former employees, announcing that its nonprofit would retain control of the company even as it restructures into a public benefit corporation.
Nisan Stiennon worked at OpenAI from 2018 to 2020 and was among a group of former employees urging California and Delaware not to approve OpenAI’s restructuring effort. “OpenAI may one day build technology that could get us all killed,” Stiennon wrote in a statement in April. “It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity.”
But even with the nonprofit maintaining control and majority ownership, OpenAI is speedily working to commercialize products as competition heats up in generative AI. And it may have rushed the rollout of its o1 reasoning model last year, according to some portions of its model card.
Results of the model’s “preparedness evaluations,” the tests OpenAI runs to assess an AI model’s dangerous capabilities and other risks, were based on earlier versions of o1. They had not been run on the final version of the model, according to its model card, which is publicly available.
Johannes Heidecke, OpenAI’s head of safety systems, told CNBC in an interview that the company ran its preparedness evaluations on near-final versions of the o1 model. Minor variations to the model that took place after those tests wouldn’t have contributed to significant jumps in its intelligence or reasoning and thus wouldn’t require additional evaluations, he said. Still, Heidecke acknowledged that OpenAI missed an opportunity to more clearly explain the difference.
OpenAI’s newest reasoning model, o3, released in April, seems to hallucinate more than twice as often as o1, according to the model card. When an AI model hallucinates, it produces falsehoods or illogical information.
OpenAI has also been criticized for reportedly slashing safety testing times from months to days and for omitting the requirement to safety test fine-tuned models in its latest “Preparedness Framework.”
Heidecke said OpenAI has decreased the time needed for safety testing because the company has improved its testing effectiveness and efficiency. A company spokesperson said OpenAI has allocated more AI infrastructure and personnel to its safety testing, and has increased resources for paying experts and growing its network of external testers.
In April, the company shipped GPT-4.1, one of its new models, without a safety report, as the model was not designated by OpenAI as a “frontier model,” which is a term used by the tech industry to refer to a bleeding-edge, large-scale AI model.
But one of those small revisions caused a big wave in April. Within days of updating its GPT-4o model, OpenAI rolled back the changes after screenshots of overly flattering responses to ChatGPT users went viral online. OpenAI said in a blog post explaining its decision that those types of responses to user inquiries “raise safety concerns — including around issues like mental health, emotional over-reliance, or risky behavior.”
OpenAI said in the blog post that it opted to release the model even after some expert testers flagged that its behavior “‘felt’ slightly off.”
“In the end, we decided to launch the model due to the positive signals from the users who tried out the model. Unfortunately, this was the wrong call,” OpenAI wrote. “Looking back, the qualitative assessments were hinting at something important, and we should’ve paid closer attention. They were picking up on a blind spot in our other evals and metrics.”
Metr, a company OpenAI partners with to test and evaluate its models for safety, said in a recent blog post that it was given less time to test the o3 and o4-mini models than predecessors.
“Limitations in this evaluation prevent us from making robust capability assessments,” Metr wrote, adding that the tests it did were “conducted in a relatively short time.”
Metr also wrote that it had insufficient access to data that would be important in determining the potential dangers of the two models.
The company said it wasn’t able to access the OpenAI models’ internal reasoning, which is “likely to contain important information for interpreting our results.” However, Metr said, “OpenAI shared helpful information on some of their own evaluation results.”
OpenAI’s spokesperson said the company is piloting secure ways of sharing chains of thought for Metr’s research as well as for other third-party organizations.
Steven Adler, a former safety researcher at OpenAI, told CNBC that safety testing a model before it’s rolled out is no longer enough to safeguard against potential dangers.
“You need to be vigilant before and during training to reduce the chance of creating a very capable, misaligned model in the first place,” Adler said.
He warned that companies such as OpenAI are backed into a corner when they create capable but misaligned models whose goals differ from the ones they intended to build.
“Unfortunately, we don’t yet have strong scientific knowledge for fixing these models — just ways of papering over the behavior,” Adler said.

Technology
Stock trading app eToro pops 40% in Nasdaq debut after pricing IPO above expected range
Published May 14, 2025 by admin
Omar Marques | Sopa Images | Lightrocket | Getty Images
Shares of stock brokerage platform eToro popped in their Nasdaq debut on Wednesday after the company raised almost $310 million in its initial public offering.
The stock opened at $69.69, or 34% above its IPO price, pushing its market cap to $5.6 billion. Shares were last up more than 40%.
The Israel-based company sold nearly six million shares at $52 each, above the expected range of $46 to $50. Almost six million additional shares were sold by existing investors. At the IPO price, the company was valued at roughly $4.2 billion.
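The offering figures above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, noting that the share count is an assumption (CNBC reports only "nearly six million" shares sold by the company):

```python
# Approximate check of the eToro IPO figures reported above.
# The exact share count is an assumption for illustration.
shares_sold_by_company = 5_960_000   # "nearly six million" (assumed)
ipo_price = 52.00                    # dollars per share

proceeds_millions = shares_sold_by_company * ipo_price / 1e6
print(round(proceeds_millions))      # roughly 310, matching "almost $310 million"

open_price = 69.69
pop_pct = (open_price - ipo_price) / ipo_price * 100
print(round(pop_pct))                # roughly 34, matching the reported 34% pop
```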
Wall Street is looking to the Robinhood competitor for signs of renewed interest in IPOs after an extended drought. Many investors saw President Donald Trump’s return to the White House as a catalyst before tariff concerns led companies to delay their plans.
EToro isn't the only company testing the waters. Fintech company Chime filed its prospectus with the U.S. Securities and Exchange Commission on Tuesday, while digital physical therapy company Hinge Health kickstarted its IPO roadshow and said in a filing that it aims to raise up to $437 million in its offering.
EToro had previously filed to go public in 2021 through a merger with a special purpose acquisition company, or SPAC, that would have valued it at more than $10 billion. It shelved those plans in 2022 as equity markets nosedived, but remained focused on an eventual IPO.
EToro was founded in 2007 by brothers Yoni and Ronen Assia and David Ring. The company makes money through trading-related fees and nontrading activities such as withdrawals. Net income increased almost thirteenfold last year to $192.4 million from $15.3 million in 2023.
The company has steadily built a growing business in cryptocurrencies. Revenue from crypto assets more than tripled to upward of $12 million in 2024, and one-quarter of its net trading contribution stemmed from crypto last year. That is up from 10% in 2023.
EToro said that for the first quarter, it expects crypto assets to account for 37% of its commission from trading activities, down from 43% a year earlier.
Spark Capital is the company’s biggest outside investor, with 14% control after the offering, followed by BRM Group at 8.7%. CEO Yoni Assia controls 9.3%.

Technology
5 new Uber features you should know — including a way to avoid surge pricing
Published May 14, 2025 by admin
Travelers walk past a sign pointing toward the Uber ride-share vehicle pickup area at Los Angeles International Airport in Los Angeles on Feb. 8, 2023.
Mario Tama | Getty Images
Uber is giving commuters new ways to travel and cut costs on frequent rides.
The ride-hailing company on Wednesday announced a route-share feature on its platform, prepaid ride passes and a special deals week for Uber One members at its annual Go-Get showcase.
Uber's new features come as the company looks to extend its lead in the ride-sharing market and offer more affordable options for users. The announcements also follow last week's first-quarter earnings report, in which Uber swung to a profit but fell short of revenue estimates.
“The goal for us as we build our products is to put people at the center of everything, and right now for us, it means making things a little easier, a little more predictable, and above all, just a little more — or a lot more — affordable,” said Uber CEO Dara Khosrowshahi at the event.
Here are some of the big announcements from the annual product event.
Route Share
Users looking to save money on regular routes and willing to walk a short distance can select a shared ride with up to two other passengers through the new route-share feature.
The prepopulated routes run every 20 minutes along busy areas between 6 a.m. and 10 a.m. and 4 p.m. and 8 p.m. on weekdays. The initial program is slated to kick off in seven cities, including New York, San Francisco, Boston and Chicago.
Uber said its new route-share fares will cost up to 50% less than an UberX option, and that it is working to partner with employers on qualifying the feature for commuter benefits. Users can book a seat from 7 days to 10 minutes before a pickup departure.
Ride Passes
Riders on Uber can now prepurchase two types of ride passes to lock in fares on frequently traveled routes. For $2.99 a month, riders can buy a price lock pass that holds the fare between two locations for a one-hour window each day. The pass expires after 30 days or once the rider has saved a total of $50.
The feature gives riders a way to avoid surge pricing.
Ride Passes roll out in 10 cities on Wednesday, including Dallas, Orlando and San Francisco, and can be purchased for up to 10 routes a month. Uber will charge users a lower price if the fare is cheaper than the pass at departure time.
The company also debuted a prepaid pass option, allowing users to pay in advance and stock up on regular monthly trips. Uber's passes come in bundles of 5, 10, 15 or 20 rides, with corresponding discounts ranging from 5% to 20%.
Both pass options will be available on teen accounts in the fall, Uber said. The route share and ride passes will be available in a new commuter hub feature on the app coming later this year.
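The price lock rule described above — the rider pays the locked fare, or less if the live fare happens to be cheaper — reduces to a simple minimum. A minimal sketch, with all names and numbers illustrative assumptions rather than Uber's actual implementation:

```python
# Hypothetical sketch of the price-lock pass logic described above.
def fare_with_price_lock(locked_fare: float, current_fare: float) -> float:
    """Rider is charged the locked fare, or the live fare if it is cheaper."""
    return min(locked_fare, current_fare)

# During surge pricing, the lock caps what the rider pays:
print(fare_with_price_lock(18.00, 27.50))  # 18.0

# Off-peak, the rider gets the cheaper live fare instead:
print(fare_with_price_lock(18.00, 14.25))  # 14.25
```

This is why the feature works as surge protection: the locked price is a ceiling, never a floor.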
Shared autonomous rides
Uber is also expanding its autonomous vehicle partnership with Volkswagen.
The company will start testing shared AV rides later this year and is aiming for a launch in Los Angeles in 2026.
Uber rolled out autonomous rides in Austin, Texas, in March through its agreement with Alphabet-owned Waymo, a partnership announced in May 2023, and is preparing for an Atlanta launch this summer. Autonomous Waymo rides are also currently offered through the Uber app in Phoenix, but the company does not directly manage that fleet.
Khosrowshahi called AVs “the single greatest opportunity ahead for Uber” during the company’s earnings call last week and said the Austin debut “exceeded” expectations. The company previously had an AV unit that it sold in 2020 as it faced high costs and a series of safety challenges, including a fatal accident.
Along with Volkswagen and Waymo, Uber has joined forces with Avride, May Mobility and self-driving trucking company Aurora for autonomous ride-sharing and freight services in the U.S. The company has partnerships with WeRide, Pony.AI and Momenta internationally.
Uber One Member Days
Uber is taking a page out of Amazon’s book by offering its own variation of the e-commerce giant’s beloved Prime Day, with special offers between May 16 and 23 for Uber One members.
Some of those deals include 50% off shared rides and 20% off Uber Black. The platform is also adding a new benefit of 10% back in Uber credits for users who use Uber Rent or book Lime rides.
UberEats partnership with OpenTable
UberEats also announced a partnership with OpenTable to allow users to book reservations and rides.
The new feature launches in six countries, including the U.S. and Australia.
Through the partnership, users can book restaurant reservations and get a discount on rides. OpenTable members will also be able to transfer points to Uber and UberEats. The company is also offering OpenTable VIPs a six-month free trial of Uber One.