The logos of the big technology companies known collectively as GAFAM, for Google, Apple, Facebook, Amazon and Microsoft, displayed in Mulhouse, France, on June 2, 2023.
Sebastien Bozon | AFP | Getty Images
Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday.
There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job.
But it was all for nothing. The project was ultimately “deprioritized,” the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.
The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Because code can break when the required tests are postponed, he recalled periods when team members had to call one another in the middle of the night to fix aspects of the AI feature’s software.
AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its “iPhone moment.”
The tech workers spoke to CNBC mostly on the condition that they remain unnamed because they weren’t authorized to speak to the media. The experiences they shared illustrate a broader trend across the industry, rather than a single company’s approach to AI.
They spoke of accelerated timelines, chasing rivals’ AI announcements and an overall lack of concern from their superiors about real-world effects, themes that appear common across a broad spectrum of the biggest tech companies — from Apple to Amazon to Google.
Engineers and those in other roles in the field said an increasingly large part of their job was focused on satisfying investors and not falling behind the competition, rather than solving actual problems for users. Some said they were switched over to AI teams to help support fast-paced rollouts without adequate time to train or learn about the technology, even though they were new to it.
A common feeling they described is burnout from immense pressure, long hours and mandates that are constantly changing. Many said their employers are looking past surveillance concerns, AI’s effect on the climate and other potential harms, all in the name of speed. Some said they or their colleagues were looking for other jobs or switching out of AI departments, due to an untenable pace.
This is the dark underbelly of the generative AI gold rush. Tech companies are racing to build chatbots, agents and image generators, and they’re spending billions of dollars training their own large language models to ensure their relevance in a market that’s predicted to top $1 trillion in revenue within a decade.
Tech’s megacap companies aren’t being shy about acknowledging to investors and employees how much AI is shaping their decision-making.
Microsoft Chief Financial Officer Amy Hood, on an earnings call earlier this year, said the software company is “repivoting our workforce toward the AI-first work we’re doing without adding material number of people to the workforce,” and said Microsoft will continue to prioritize investing in AI as “the thing that’s going to shape the next decade.”
Meta CEO Mark Zuckerberg has made a similar case to investors. “This leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world,” Zuckerberg said.
At Amazon, CEO Andy Jassy told investors last week that the “generative AI opportunity” is almost unprecedented, and that increased capital spending is necessary to take advantage of it.
“I don’t know if any of us has seen a possibility like this in technology in a really long time, for sure since the cloud, perhaps since the Internet,” Jassy said.
Speed above everything
On the ground floor, where those investments are taking place, things can get messy.
The Amazon engineer, who lost his weekend to a project that was ultimately scuttled, said higher-ups seemed to be doing things just to “tick a checkbox,” and that speed, rather than quality, was the priority while trying to recreate products coming out of Microsoft or OpenAI.
In an emailed statement to CNBC, an Amazon spokesperson said the company is “focused on building and deploying useful, reliable, and secure generative AI innovations that reinvent and enhance customers’ experiences,” and that Amazon is supporting its employees to “deliver those innovations.”
“It’s inaccurate and misleading to use a single employee’s anecdote to characterize the experience of all Amazon employees working in AI,” the spokesperson said.
Last year marked the beginning of the generative AI boom, following the debut of OpenAI’s ChatGPT near the end of 2022. Since then, Microsoft, Alphabet, Meta, Amazon and others have been snapping up Nvidia’s processors, which are at the core of most big AI models.
While companies such as Alphabet and Amazon continue to downsize their total headcount, they’re aggressively hiring AI experts and pouring resources into building their models and developing features for consumers and businesses.
Eric Gu, a former Apple employee who spent about four years working on AI initiatives, including for the Vision Pro headset, said that toward the end of his time at the company, he felt “boxed in.”
“Apple is a very product-focused company, so there’s this intense pressure to immediately be productive, start shipping and contributing features,” Gu said. He said that even though he was surrounded by “these brilliant people,” there was no time to really learn from them.
“It boils down to the pace at which it felt like you had to ship and perform,” said Gu, who left Apple a year ago to join AI startup Imbue, where he said he can work on equally ambitious projects but at a more measured pace.
Apple declined to comment.
Microsoft CEO Satya Nadella (R) speaks as OpenAI CEO Sam Altman (L) looks on during the OpenAI DevDay event in San Francisco on Nov. 6, 2023.
Justin Sullivan | Getty Images
An AI engineer at Microsoft said the company is engaged in an “AI rat race.”
When it comes to ethics and safeguards, he said, Microsoft has cut corners in favor of speed, leading to rushed rollouts without sufficient concerns about what could follow. The engineer said there’s a recognition that because all of the large tech companies have access to most of the same data, there’s no real moat in AI.
Microsoft didn’t provide a comment.
Morry Kolman, an independent software engineer and digital artist who has worked on viral projects that have garnered more than 200,000 users, said that in the age of rapid advancement in AI, “it’s hard to figure out where is worth investing your time.”
“And that is very conducive to burnout just in the sense that it makes it hard to believe in something,” Kolman said, adding, “I think that the biggest thing for me is that it’s not cool or fun anymore.”
At Google, an AI team member said the burnout is the result of competitive pressure, shorter timelines and a lack of resources, particularly budget and headcount. Although many top tech companies have said they are redirecting resources to AI, the required headcount, especially on a rushed timeline, doesn’t always materialize. That is certainly the case at Google, the AI staffer said.
The company’s hurried output has led to some public embarrassment. Google Gemini’s image-generation tool was released and promptly taken offline in February after users discovered historical inaccuracies and questionable responses. In early 2023, Google employees criticized leadership, most notably CEO Sundar Pichai, for what they called a “rushed” and “botched” announcement of Bard, its initial ChatGPT competitor.
The Google AI engineer, who has over a decade of experience in tech, said she understands the pressure to move fast, given the intense competition in generative AI. But it’s all happening, she said, as the industry is in cost-cutting mode, with companies slashing their workforces to meet investor demands and “increase their bottom line.”
There’s also the conference schedule. AI teams had to prepare for the Google I/O developer event in May 2023, followed by Cloud Next in August and then another Cloud Next conference in April 2024. That’s a significantly shorter gap between events than normal, and created a crunch for a team that was “beholden to conference timelines” for shipping features, the Google engineer said.
Google didn’t provide a comment for this story.
The sentiment in AI is not limited to the biggest companies.
An AI researcher at a government agency reported feeling rushed to keep up. Even though the government is notorious for moving slower than companies, the pressure “trickles down everywhere,” since everyone wants to get in on generative AI, the person said.
And it’s happening at startups.
There are companies getting funded by “really big VC firms who are expecting this 10X-like return,” said Ayodele Odubela, a data scientist and AI policy advisor.
“They’re trying to strike while the iron is hot,” she said.
‘A big pile of nonsense’
Regardless of the employer, AI workers said much of their work involves building AI for the sake of AI, rather than solving a business problem or serving customers directly.
“A lot of times, it’s being asked to provide a solution to a problem that doesn’t exist with a tool that you don’t want to use,” independent software engineer Kolman told CNBC.
The Microsoft AI engineer said a lot of tasks are about “trying to create AI hype” with no practical use. He recalled instances when a software engineer on his team would come up with an algorithm to solve a particular problem that didn’t involve generative AI. That solution would be pushed aside in favor of one that used a large language model, even if it were less efficient, more expensive and slower, the person said. He described the irony of using an “inferior solution” just because it involved an AI model.
A software engineer at a major internet company, who asked that the company not be named because of his group’s small size, said the new team he works on, which is dedicated to AI advancement, is doing large language model research “because that’s what’s hot right now.”
The engineer has worked in machine learning for years, and described much of the work in generative AI today as an “extreme amount of vaporware and hype.” Every two weeks, the engineer said, there’s some sort of big pivot, but ultimately there’s the sense that everyone is building the same thing.
He said he often has to put together demos of AI products for the company’s board of directors on three-week timelines, even though the products are “a big pile of nonsense.” There’s a constant effort to appease investors and fight for money, he said. He gave one example of building a web app to show investors even though it wasn’t related to the team’s actual work. After the presentation, “We never touched it again,” he said.
A product manager at a fintech startup said one of his projects involved rebranding the company’s existing algorithms as AI. He also worked on a ChatGPT plug-in for customers. Executives at the company never told the team why it was needed.
The employee said it felt “out of order.” The company was starting with a solution involving AI without ever defining the problem.
An AI engineer who works at a retail surveillance startup told CNBC that he’s the only AI engineer at a company of 40 people and that he handles every responsibility related to AI, an overwhelming workload.
He said the company’s investors have inaccurate views on the capabilities of AI, often asking him to build certain things that are “impossible for me to deliver.” He said he hopes to leave for graduate school and to publish research independently.
Risky business
The Google staffer said that about six months into her role, she felt she could finally keep her head above water. Even then, she said, the pressure continued to mount, as the demands on the team were “not sustainable.”
She used the analogy of “building the plane while flying it” to describe the company’s approach to product development.
Amazon Web Services CEO Adam Selipsky speaks with Anthropic CEO and co-founder Dario Amodei during AWS re:Invent 2023, a conference hosted by Amazon Web Services, at The Venetian Las Vegas in Las Vegas on Nov. 28, 2023.
Noah Berger | Getty Images
The Amazon AI engineer expressed a similar sentiment, saying everyone on his current team was pulled into working on a product that was running behind schedule, and that many were “thrown into it” without relevant experience and onboarding.
He also said AI accuracy, and testing in general, has taken a backseat to the speed of product rollouts, despite “motivational speeches” from managers about how their work will “revolutionize the industry.”
Odubela underscored the ethical risks of inadequate training for AI workers and of rushing AI projects to keep up with the competition. She pointed to the problems with Google Gemini’s image creator when the product hit the market in February. In one instance, a user asked Gemini to show a German soldier in 1943, and the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots viewed by CNBC.
“The biggest piece that’s missing is lacking the ability to work with domain experts on projects, and the ability to even evaluate them as stringently as they should be evaluated before release,” Odubela said, regarding the current ethos in AI.
At a moment in technology when thoughtfulness is more important than ever, some of the leading companies appear to be doing the opposite.
“I think the major harm that comes is there’s no time to think critically,” Odubela said.
Jensen Huang, co-founder and CEO of Nvidia, displays the new Blackwell GPU chip during the Nvidia GPU Technology Conference in San Jose, California, on March 18, 2024.
David Paul Morris/Bloomberg via Getty Images
Nvidia CEO Jensen Huang is expected to reveal details about Rubin, the chipmaker’s next AI graphics processor, on Tuesday at the company’s annual GTC conference.
While other tech companies usually name their products with combinations of inscrutable letters and numbers, Nvidia’s most recent GPU architectures have been named after pioneering women and minority scientists.
Nvidia is naming its next critical AI chip platform after Vera Rubin, an American astronomer.
The company has never explained its naming convention, and it hasn’t emphasized the diversity aspect of its choices. But Nvidia’s chip names highlighting women and minority scientists are among the most visible efforts to honor diversity in the tech industry during a period when diversity, equity and inclusion, or DEI, initiatives are being slashed by the Trump administration.
Rubin discovered much of what is known about “dark matter,” an invisible form of matter thought to make up roughly a quarter of the universe’s total mass and energy, which doesn’t emit light or other radiation. She also advocated for women in science throughout her career.
Nvidia has been naming its architectures after scientists since 1998, when its first chips were based on the company’s “Fahrenheit” microarchitecture. It’s part of the company’s culture – Nvidia used to sell an employee-only t-shirt with cartoons of several famous scientists on it.
The naming convention is one of the Nvidia quirks that has drawn more attention as the company has risen to become one of the three most valuable tech companies and one of the most important suppliers to Google, Microsoft, Amazon, OpenAI, Tesla and Meta.
Investors want to hear on Tuesday how fast the Rubin chips will be, what configurations they will come in and when they might start shipping.
Before revealing a new architecture, Nvidia CEO Jensen Huang usually gives a one-sentence biography of the scientist it’s named after.
“I’d like to introduce you to a very, very big GPU named after David Blackwell, mathematician, game theorist, probability,” Huang said at last year’s GTC conference. “We thought it was a perfect name.”
Rubin is a fitting name for Nvidia’s next chip, which comes as the company tries to solidify the gains it has made in recent years as the leader in AI hardware. “Vera” will refer to Nvidia’s next-generation central processor, and “Rubin” will refer to Nvidia’s new GPU.
FILE PHOTO: World famous astronomer Vera Rubin, 82, in her office at Carnegie Institution of Washington in Washington, DC on January 14, 2010.
Linda Davidson | The Washington Post | Getty Images
Born in Philadelphia in 1928, Rubin studied deep space and worked with other scientists to develop better telescopes and instruments that could collect more detailed data about the universe. In 1968, according to a Nova documentary, she started observing the Andromeda galaxy and collecting the data that would upend science’s understanding of our universe.
Her primary claim to fame came after she observed how quickly galaxies rotate.
“The presumption was that the stars near the center of a galaxy would be orbiting very rapidly, and stars at the outside would be going very slowly,” Rubin said in 1987.
But Rubin observed that the outer stars were moving just as quickly, contrary to expectations. They weren’t flying out of orbit, which meant there had to be more mass than scientists could see, a finding that confirmed the concept of dark matter.
She was acclaimed during her lifetime, published over 100 papers and held three advanced degrees, but she still faced discrimination because of her sex. Early in her career, Rubin wasn’t allowed to collect her own data, and some observatories didn’t allow women, according to the documentary.
Rubin died in 2016. In 2019, the Vera C. Rubin Observatory, a state-of-the-art telescope in Chile, was named after her. Earlier this year, a biography on the federally funded observatory’s website was edited to remove details about her advocacy for women in science, according to ProPublica.
“I hope you will love your work as I love doing astronomy,” Rubin said at a commencement address in 1996. “I hope that you will fight injustice and discrimination in all its guises.”
Rubin isn’t the first woman to be honored with an Nvidia chip named after her.
Before Blackwell, who was the first Black American inducted into the National Academy of Sciences, Nvidia’s most advanced AI chip family was Hopper, named after American computer scientist Grace Hopper, who popularized the term “bug” for computer glitches. In 2022, Nvidia released its “Ada Lovelace” architecture, named after the British mathematician who pioneered computer algorithms in the 19th century.
The scientist names used to be a secondary naming convention, taking a back seat to the actual product name, and primarily appearing in marketing copy. Nvidia users more frequently referred to the “H100” chip or marketing names for consumer graphics cards like GeForce RTX 3090.
But last year, Huang emphasized that Blackwell wasn’t a single chip but a technology platform, and Nvidia increasingly started using the term “Blackwell” to refer to all of the company’s latest-generation AI products, such as its GB200 chip and DGX server racks.
It’s critical for Nvidia that Rubin achieve the same name recognition as Hopper and Blackwell.
The company’s sales more than doubled in its fiscal 2025, which ended in January, to $124.62 billion, thanks to durable sales of the company’s Hopper chips and early demand for its Blackwell chips.
To keep that growth going, Nvidia needs to deliver a next-generation chip that justifies its cost and improves on the previous generation’s speed, power efficiency and cost of ownership.
The company has targeted 2026 for a rollout of the Vera chips, according to an investor presentation last fall. In addition to Vera Rubin, Nvidia is expected to discuss Blackwell Ultra, an updated version of its Blackwell chips that analysts expect the company to start selling later this year.
Huang also teased during an earnings call last month that he’ll show the “next click” after Vera Rubin. That architecture will likely be named after a scientist, too.
“These products should excite partners at the conference ranging from Microsoft to Dell to sovereigns, which normally would please investors,” Melius Research analyst Ben Reitzes wrote in a note on Monday.
Tuesday’s keynote will also be a test of Nvidia’s relatively new release cadence, where it strives to reveal new chips on an annual basis. Investors will also want to see whether Nvidia can continue to impress tech critics and developers while releasing new chip families on a faster schedule than it’s used to. Blackwell was announced last March, and its sales started showing up in Nvidia’s October quarter.
The texts first started arriving on Eric Moyer’s phone in February. They warned him that if he didn’t pay his FastTrak lane tolls by February 21, he could face a fine and lose his license.
The Virginia Beach resident did what most people do: He ignored them. But there was enough hesitation to at least double-check.
“I knew they were a scam immediately; however, I had to verify my intuition, of course; I accessed my E-ZPass account to ensure, plus I knew that I had not utilized a toll road in recent months,” Moyer said, adding that his wife’s phone also received the same blitz of menacing messages.
But not everyone ignores them, and, unlike Moyer, not everyone has an E-ZPass account to check. Some people do pay, which makes the whole endeavor worthwhile for hackers, and which is why the toll texts keep coming. And coming.
In fact, cybersecurity firm Trend Micro has seen a 900% increase in searches for “toll road scams” over the last three months, which the company says is a sign that these scams are hitting everyone, everywhere, and hitting hard.
“It is obviously working; they are getting victims to pay it. This one apparently seems to be going on a lot longer than we normally see these things,” said Jon Clay, vice president of threat intelligence at Trend Micro.
In this case, the “they” are likely Chinese criminal gangs working from wherever they can find a foothold, including Southeast Asia, which Clay says the gangs are turning into a hot spot.
“They are basically building big data centers in the jungle,” Clay said, and staffing them with scammers.
Clay also says that absent a big news event that scammers can latch onto, the toll scam fills the void. But he said tax-time scams will soon really ramp up.
What really makes the toll scam effective is that it’s cheap and easy for scammers to run. They can buy phone numbers in bulk and send out millions of texts. A handful of people will be persuaded to pay the $3 toll fee to avoid the (fictional) threat of fines or license revocation. But Clay says the scammers aren’t just interested in the $3; the personal information you enter has far more value.
“Once they have that, they can scam you for other things,” Clay said.
Aidan Holland, senior security researcher at threat research platform Censys, has been extensively tracking toll scams and agrees that they are likely perpetrated by Chinese criminals overseas. Holland has identified 60,000 domains, which he estimates cost the criminals $90,000 to buy in bulk and use to launch attacks.
“You don’t invest that much unless you are getting some kind of return,” Holland said.
State-run toll systems across the U.S. targeted
The domains use variations on the names of state-run toll systems, such as Georgia’s Peach Pass, Florida’s SunPass and Texas’ TxTag. The criminals also register generic-sounding toll domains aimed at people who don’t have a specific toll system in their state. Holland has traced the domains to Chinese networks, which points to a Chinese origin.
Apple’s iPhones are supposed to have a safety feature that disables links in texts from unknown senders, but hackers are finding ways to evade it, making it easier to fall for the ruse.
“They are constantly changing tactics,” Holland said.
Apple did not respond to a request for comment.
“Apple doesn’t do anything about it. … Android will add it to their spam list so you won’t get texts from the same number, but then the scammers will just change numbers,” Clay said. “Apple has done a wonderful job of telling everyone their phone is secure, and they are, but not from this kind of attack,” Clay added.
On the Ohio Turnpike, which runs 241 miles across the state, the scam first appeared on officials’ radar in April 2024, but it has been ramping up recently, said a spokesman for the state’s public road system.
“Over the past two weeks, our customer service center has received a record number of calls from customers and mobile device users in area codes across Ohio and elsewhere about the texting scam,” the spokesman said. The good news, he said, is that the calls have been tailing off in recent days, likely because of growing awareness, and he added that he personally knows of few people who have fallen for the scam.
However, the issue has become acute enough that the Ohio Turnpike and Infrastructure Commission produced a public service video to raise awareness.
Ultimately, scammers are banking on human nature to make scams effective.
“Scammers want people to panic, not pause, so they use fear and urgency to rush people into clicking before they spot the scam,” said Amy Bunn, online safety advocate at McAfee. Bunn says AI tools are making this type of scam more prevalent.
“Greater access to AI tools helps cybercriminals create a higher volume of convincing text messages that trick people into sharing sensitive personal or payment information – like they’d enter when paying a toll road fine,” Bunn said. McAfee research found that toll scams nearly quadrupled in volume from early January to the end of February this year.
Even if you know the text is fraudulent, she says, it’s important to resist the urge to reply with a few choice words or a simple “stop.”
Don’t engage at all.
“Even a seemingly innocent reply to the message can tip scammers off that your number is live and active,” Bunn said.
Holland worries that the ones falling for the scam are society’s most vulnerable: the elderly and less tech-savvy people, even children who may receive the messages on their phones.
Others have an easier time spotting the fraud.
“I got my first text yesterday; I just deleted it. The funny thing about it is that I don’t drive and haven’t for over 30 years,” said Millie Lewis, 77, of Cleves, Ohio.
Shantanu Narayen, chairman and CEO of Adobe Systems, addresses the gathering on the first day of the three-day B20 Summit in New Delhi on August 25, 2023.
Sajjad Hussain | AFP | Getty Images
Adobe shares dropped 13% following the company’s quarterly earnings report as investors fretted over lingering growth concerns and the software maker’s artificial intelligence monetization strategy.
The sell-off came despite better-than-expected results, which included adjusted earnings of $5.08 per share and $5.71 billion in revenue. That surpassed analysts’ estimates of $4.97 in earnings per share and $5.66 billion in revenue, according to LSEG.
Adobe called for $4.95 to $5.00 in adjusted earnings per share for the current quarter on $5.77 billion to $5.82 billion in revenue. Analysts polled by LSEG had expected $5.00 per share on $5.80 billion in revenue.
Worries have mounted in recent months that the company is falling behind some competitors and losing its advantage in generative AI. The company’s AI offerings contributed $125 million in annualized recurring revenue during the period, and Adobe expects that figure to double by the end of the fiscal year.
Bernstein’s Mark Moerdler, who recommends buying the stock, wrote in a report that to “believe that ADBE is an AI winner and that AI is not replacing existing revenue streams, investors need to be able to observe longer-term trends.”
Keith Weiss, an analyst at Morgan Stanley, wrote that “new disclosure of GenAI contribution is a step in the right direction,” but that investors need to see a “clearer roadmap” at the company’s investor meeting at its annual conference next week. Morgan Stanley has the equivalent of a buy rating on the stock.
In an interview with CNBC’s “Closing Bell: Overtime” on Wednesday, Adobe CEO Shantanu Narayen said: “Not only are we infusing AI in our existing products and delivering value, but it’s clear that the innovation that we’ve delivered is creating new revenue streams.”
Total revenue increased 10% year over year in the quarter that ended on Feb. 28, according to a statement. Net income of $1.81 billion, or $4.14 per share, was up from $620 million, or $1.36 per share, in the same quarter a year earlier. Adjusted earnings per share exclude the impact of stock-based compensation and income taxes.
For the 2025 fiscal year, the company expects adjusted earnings per share of between $20.20 and $20.50, with $23.3 billion to $23.55 billion in revenue. That implies about 9% growth at the middle of the range. The LSEG consensus was for earnings of $20.40 per share, with $23.49 billion in revenue.