A picture shows logos of the big technology companies named GAFAM, for Google, Apple, Facebook, Amazon and Microsoft, in Mulhouse, France, on June 2, 2023.
Sebastien Bozon | AFP | Getty Images
Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday.
There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job.
But it was all for nothing. The project was ultimately “deprioritized,” the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.
The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with no testing for mistakes. Because untested code tends to break, he recalled periods when team members had to call one another in the middle of the night to fix parts of the AI feature’s software.
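Below is a minimal, hypothetical sketch of the kind of automated check that gets skipped in an environment like that. The toy feature, the summarize_order function and the mocked model client are illustrative assumptions, not Amazon’s actual code.

```python
# Hypothetical sketch of the kind of automated check the engineer said there
# was no time for. The feature ("summarize_order"), its behavior and the mocked
# model client are illustrative assumptions, not Amazon's actual code.
from unittest.mock import MagicMock


def summarize_order(order: dict, llm_client) -> str:
    """Toy AI feature: ask a model to summarize an order for a support agent."""
    response = llm_client.complete(f"Summarize this order in one sentence: {order}")
    if not response or not response.strip():
        # The failure mode that breaks downstream code when nobody tests for it.
        raise ValueError("model returned an empty summary")
    return response.strip()


def test_handles_normal_model_output():
    fake_llm = MagicMock()
    fake_llm.complete.return_value = "Two items shipping to Seattle on Friday."
    assert summarize_order({"items": 2}, fake_llm) == "Two items shipping to Seattle on Friday."


def test_rejects_empty_model_output():
    fake_llm = MagicMock()
    fake_llm.complete.return_value = "   "
    try:
        summarize_order({"items": 2}, fake_llm)
    except ValueError:
        pass  # expected: the feature refuses to pass garbage downstream
    else:
        raise AssertionError("expected ValueError on empty model output")
```

Checks like these run in seconds on every code change; skipping them is how a missed edge case becomes a middle-of-the-night phone call.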
AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its “iPhone moment.”
The tech workers spoke to CNBC mostly on the condition that they remain unnamed because they weren’t authorized to speak to the media. The experiences they shared illustrate a broader trend across the industry, rather than a single company’s approach to AI.
They spoke of accelerated timelines, chasing rivals’ AI announcements and an overall lack of concern from their superiors about real-world effects, themes that appear common across a broad spectrum of the biggest tech companies — from Apple to Amazon to Google.
Engineers and others in the field said an increasingly large part of their job is focused on satisfying investors and not falling behind the competition, rather than on solving actual problems for users. Some said they were moved onto AI teams to support fast-paced rollouts without adequate time to train, even though they were new to the technology.
A common feeling they described is burnout from immense pressure, long hours and mandates that are constantly changing. Many said their employers are looking past surveillance concerns, AI’s effect on the climate and other potential harms, all in the name of speed. Some said they or their colleagues were looking for other jobs or switching out of AI departments, due to an untenable pace.
This is the dark underbelly of the generative AI gold rush. Tech companies are racing to build chatbots, agents and image generators, and they’re spending billions of dollars training their own large language models to ensure their relevance in a market that’s predicted to top $1 trillion in revenue within a decade.
Tech’s megacap companies aren’t being shy about acknowledging to investors and employees how much AI is shaping their decision-making.
Microsoft Chief Financial Officer Amy Hood, on an earnings call earlier this year, said the software company is “repivoting our workforce toward the AI-first work we’re doing without adding material number of people to the workforce,” and said Microsoft will continue to prioritize investing in AI as “the thing that’s going to shape the next decade.”
Meta CEO Mark Zuckerberg has struck a similar tone with his company’s investors.
“This leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world,” Zuckerberg said.
At Amazon, CEO Andy Jassy told investors last week that the “generative AI opportunity” is almost unprecedented, and that increased capital spending is necessary to take advantage of it.
“I don’t know if any of us has seen a possibility like this in technology in a really long time, for sure since the cloud, perhaps since the Internet,” Jassy said.
Speed above everything
On the ground floor, where those investments are taking place, things can get messy.
The Amazon engineer, who lost his weekend to a project that was ultimately scuttled, said higher-ups seemed to be doing things just to “tick a checkbox,” and that speed, rather than quality, was the priority while trying to re-create products coming out of Microsoft or OpenAI.
In an emailed statement to CNBC, an Amazon spokesperson said the company is “focused on building and deploying useful, reliable, and secure generative AI innovations that reinvent and enhance customers’ experiences,” and that Amazon is supporting its employees to “deliver those innovations.”
“It’s inaccurate and misleading to use a single employee’s anecdote to characterize the experience of all Amazon employees working in AI,” the spokesperson said.
Last year marked the beginning of the generative AI boom, following the debut of OpenAI’s ChatGPT near the end of 2022. Since then, Microsoft, Alphabet, Meta, Amazon and others have been snapping up Nvidia’s processors, which are at the core of most big AI models.
While companies such as Alphabet and Amazon continue to downsize their total headcount, they’re aggressively hiring AI experts and pouring resources into building their models and developing features for consumers and businesses.
Eric Gu, a former Apple employee who spent about four years working on AI initiatives, including for the Vision Pro headset, said that toward the end of his time at the company, he felt “boxed in.”
“Apple is a very product-focused company, so there’s this intense pressure to immediately be productive, start shipping and contributing features,” Gu said. He said that even though he was surrounded by “these brilliant people,” there was no time to really learn from them.
“It boils down to the pace at which it felt like you had to ship and perform,” said Gu, who left Apple a year ago to join AI startup Imbue, where he said he can work on equally ambitious projects but at a more measured pace.
Apple declined to comment.
Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event in San Francisco on Nov. 6, 2023.
Justin Sullivan | Getty Images
An AI engineer at Microsoft said the company is engaged in an “AI rat race.”
When it comes to ethics and safeguards, he said, Microsoft has cut corners in favor of speed, leading to rushed rollouts without sufficient concern for what could follow. The engineer said there’s a recognition that because all of the large tech companies have access to most of the same data, there’s no real moat in AI.
Microsoft didn’t provide a comment.
Morry Kolman, an independent software engineer and digital artist who has worked on viral projects that have garnered more than 200,000 users, said that in the age of rapid advancement in AI, “it’s hard to figure out where is worth investing your time.”
“And that is very conducive to burnout just in the sense that it makes it hard to believe in something,” Kolman said, adding, “I think that the biggest thing for me is that it’s not cool or fun anymore.”
At Google, an AI team member said the burnout is the result of competitive pressure, shorter timelines and a lack of resources, particularly budget and headcount. Although many top tech companies have said they are redirecting resources to AI, the required headcount, especially on a rushed timeline, doesn’t always materialize. That is certainly the case at Google, the AI staffer said.
The company’s hurried output has led to some public embarrassment. Google Gemini’s image-generation tool was released and promptly taken offline in February after users discovered historical inaccuracies and questionable responses. In early 2023, Google employees criticized leadership, most notably CEO Sundar Pichai, for what they called a “rushed” and “botched” announcement of its initial ChatGPT competitor called Bard.
The Google AI engineer, who has over a decade of experience in tech, said she understands the pressure to move fast given the intense competition in generative AI. But it’s all happening, she said, as the industry is in cost-cutting mode, with companies slashing their workforces to meet investor demands and “increase their bottom line.”
There’s also the conference schedule. AI teams had to prepare for the Google I/O developer event in May 2023, followed by Cloud Next in August and then another Cloud Next conference in April 2024. That was a significantly shorter gap between events than normal, and it created a crunch for a team that was “beholden to conference timelines” for shipping features, the Google engineer said.
Google didn’t provide a comment for this story.
The sentiment in AI is not limited to the biggest companies.
An AI researcher at a government agency reported feeling rushed to keep up. Even though the government is notorious for moving slower than companies, the pressure “trickles down everywhere,” since everyone wants to get in on generative AI, the person said.
And it’s happening at startups.
There are companies getting funded by “really big VC firms who are expecting this 10X-like return,” said Ayodele Odubela, a data scientist and AI policy advisor.
“They’re trying to strike while the iron is hot,” she said.
‘A big pile of nonsense’
Regardless of the employer, AI workers said much of their jobs involve working on AI for the sake of AI, rather than to solve a business problem or to serve customers directly.
“A lot of times, it’s being asked to provide a solution to a problem that doesn’t exist with a tool that you don’t want to use,” independent software engineer Kolman told CNBC.
The Microsoft AI engineer said a lot of tasks are about “trying to create AI hype” with no practical use. He recalled instances when a software engineer on his team would come up with an algorithm to solve a particular problem that didn’t involve generative AI. That solution would be pushed aside in favor of one that used a large language model, even if it were less efficient, more expensive and slower, the person said. He described the irony of using an “inferior solution” just because it involved an AI model.
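The tradeoff he described can be sketched with a small, hypothetical example. The task, the regex and the placeholder llm_client below are assumptions made for illustration, not code from Microsoft or any other company in this story: the same job that a few lines of deterministic logic handle instantly gets routed through a large language model, adding cost, latency and a new failure mode.

```python
# Hypothetical illustration of the tradeoff the Microsoft engineer described:
# the same task solved deterministically versus routed through an LLM.
# The task, the regex and the placeholder "llm_client" are assumptions for
# illustration, not real code from any company mentioned in this article.
import re

ORDER_ID = re.compile(r"\bORD-\d{5}\b")


def extract_order_id(message: str) -> str | None:
    """Deterministic version: runs in microseconds, costs nothing, easy to test."""
    match = ORDER_ID.search(message)
    return match.group(0) if match else None


def extract_order_id_with_llm(message: str, llm_client) -> str | None:
    """LLM version of the same task: a paid network call that is slower, and
    whose output still has to be validated before downstream code can trust it."""
    reply = llm_client.complete(
        f"Return only the order ID (format ORD-#####) from this message: {message}"
    )
    candidate = reply.strip()
    return candidate if ORDER_ID.fullmatch(candidate) else None
```

Both versions return the same answer on well-formed input; the difference is that the deterministic one is effectively free and instant, which is the inefficiency the engineer was pointing at.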
A software engineer at a major internet company, who asked that the company not be named because of his group’s small size, said the new team he works on, dedicated to AI advancement, is doing large language model research “because that’s what’s hot right now.”
The engineer has worked in machine learning for years, and described much of the work in generative AI today as an “extreme amount of vaporware and hype.” Every two weeks, the engineer said, there’s some sort of big pivot, but ultimately there’s the sense that everyone is building the same thing.
He said he often has to put together demos of AI products for the company’s board of directors on three-week timelines, even though the products are “a big pile of nonsense.” There’s a constant effort to appease investors and fight for money, he said. He gave one example of building a web app to show investors even though it wasn’t related to the team’s actual work. After the presentation, “We never touched it again,” he said.
A product manager at a fintech startup said one of his projects involved rebranding the company’s existing algorithms as AI. He also worked on a ChatGPT plug-in for customers; executives at the company never told the team why it was needed.
The employee said it felt “out of order.” The company was starting with a solution involving AI without ever defining the problem.
An AI engineer who works at a retail surveillance startup told CNBC that he’s the only AI engineer at a company of 40 people and that he handles every responsibility related to AI, which he described as overwhelming.
He said the company’s investors have inaccurate views on the capabilities of AI, often asking him to build certain things that are “impossible for me to deliver.” He said he hopes to leave for graduate school and to publish research independently.
Risky business
The Google staffer said that about six months into her role, she felt she could finally keep her head above water. Even then, she said, the pressure continued to mount, as the demands on the team were “not sustainable.”
She used the analogy of “building the plane while flying it” to describe the company’s approach to product development.
Amazon Web Services CEO Adam Selipsky speaks with Anthropic CEO and co-founder Dario Amodei during AWS re:Invent 2023, a conference hosted by Amazon Web Services, at The Venetian Las Vegas in Las Vegas on Nov. 28, 2023.
Noah Berger | Getty Images
The Amazon AI engineer expressed a similar sentiment, saying everyone on his current team was pulled into working on a product that was running behind schedule, and that many were “thrown into it” without relevant experience and onboarding.
He also said AI accuracy, and testing in general, have taken a backseat to the speed of product rollouts, despite “motivational speeches” from managers about how their work will “revolutionize the industry.”
Odubela underscored the ethical risks of giving AI workers inadequate training and of rushing AI projects to keep up with the competition. She pointed to the problems with Google Gemini’s image creator when the product hit the market in February. In one instance, a user asked Gemini to show a German soldier in 1943, and the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots viewed by CNBC.
“The biggest piece that’s missing is lacking the ability to work with domain experts on projects, and the ability to even evaluate them as stringently as they should be evaluated before release,” Odubela said, regarding the current ethos in AI.
At a moment in technology when thoughtfulness is more important than ever, some of the leading companies appear to be doing the opposite.
“I think the major harm that comes is there’s no time to think critically,” Odubela said.