Tech giants aren’t doing much acquiring these days, due mostly to an unfavorable regulatory environment. But they’re finding other ways to spend billions of dollars on the next big thing.
Amazon’s $2.75 billion investment in artificial intelligence startup Anthropic, announced this week, was its largest venture deal and the latest example of the AI gold rush that’s prompting the biggest tech companies to fling open their wallets.
Anthropic is the developer behind the AI model Claude, which competes with GPT from Microsoft-backed OpenAI and with Google’s Gemini. Along with Meta and Apple, those companies are all racing to integrate generative AI into their vast portfolios of products and features to ensure they don’t fall behind in a market that’s predicted to top $1 trillion in revenue within a decade.
In 2023, investors pumped $29.1 billion combined into nearly 700 generative AI deals, an increase of more than 260% in value from the prior year, according to PitchBook.
A significant chunk of that money was strategic, in that it came from tech companies rather than venture capitalists or other institutions. Fred Havemeyer, head of U.S. AI and software research at Macquarie, said a fear of missing out is one factor driving their decisions.
“They definitely don’t want to miss out on being part of the AI ecosystem,” Havemeyer said. “I definitely think that there’s FOMO in this marketplace.”
The hefty investments are necessary because AI models are notoriously expensive to build and train, requiring thousands of specialized chips that, to date, have largely come from Nvidia. Meta, which is developing its own model called Llama, has said it’s spending billions of dollars on Nvidia’s graphics processing units, making it one of the many companies that have helped the chipmaker bolster year-over-year revenue by more than 250%.
Whether they go the building route or the investing route, only a limited number of companies can afford to play in the market. In addition to developing the chips, Nvidia has emerged as one of Silicon Valley’s top investors, taking stakes in a number of emerging AI companies, partly as a way to make sure its technology gets widely deployed. Similarly, Microsoft, Google and Amazon sometimes offer cloud credits as part of their investments.
In the Amazon-Anthropic deal announced on Wednesday, the two companies said they’ll work closely together in a variety of ways. Anthropic will use Amazon Web Services for its computing needs and will run on Amazon’s in-house chips, while Amazon will distribute Anthropic’s models to AWS customers.
Earlier this month, Anthropic launched Claude 3, its most powerful model and one that it says lets users upload photos, charts, documents and other types of unstructured data for analysis and answers.
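In practical terms, that distribution happens through Amazon Bedrock, the AWS service that exposes third-party models to cloud customers. As a rough illustration, a minimal sketch of what such a call might look like with the boto3 SDK is below; the region and model ID are assumptions that depend on which Claude versions an account has enabled.

```python
# Hypothetical sketch: an AWS customer calling a Claude model through Amazon Bedrock.
# Assumes boto3 is installed, AWS credentials are configured, and the account has
# been granted access to the chosen model in the chosen region.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "anthropic_version": "bedrock-2023-05-31",  # version string Bedrock expects for Anthropic models
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "Summarize this quarter's cloud spending in one sentence."}],
        }
    ],
}

# The model ID below is illustrative; the exact identifier varies by Claude version.
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(request_body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

The same messages format also accepts image content blocks, which is how the photo and chart uploads described above would reach the model, though the exact payload shape here is an assumption.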
Microsoft got into the business of generative AI investing earlier, putting $1 billion into OpenAI in 2019. The size of its investment has since swelled to about $13 billion. Microsoft heavily uses OpenAI’s models in its own products and also offers open-source models on its Azure cloud.
Alphabet is playing the part of both builder and investor. The company has refocused much of its product development on generative AI and its newly rebranded Gemini model, adding features to search, documents, maps and elsewhere. Last year, Google committed to invest $2 billion in Anthropic, after previously confirming it had taken a 10% stake in the startup alongside a large cloud contract between the two companies.
In this photo illustration, Gemini AI is seen on a phone on March 18, 2024 in New York City.
Michael M. Santiago | Getty Images
Havemeyer said tech giants aren’t just throwing money into the “hype cycle,” as these investments in AI startups align with their product road maps.
“I don’t think it’s frivolous,” he said.
Havemeyer said that alliances with big cloud providers not only bring much-needed cash to startups but also help them sign up customers.
The cloud companies are saying, “Come to us, work on our platform, have native access to the latest and greatest AI models, and also use our infrastructure,” Havemeyer said. “It’s also part of a much larger ecosystem play.”
“We’re seeing a lot of alliances appearing among those hyperscalers that have substantial scale, infrastructure and very deep pockets,” he added.
‘Shape the next decade’
In recent earnings calls, tech execs reiterated their focus on generative AI, making it clear to investors that they have to spend money to make money, whether it’s on internal development or through investing in startups.
Microsoft Chief Financial Officer Amy Hood said last year the company was adjusting its “workforce toward the AI-first work we’re doing without adding material number of people to the workforce.” She said Microsoft will continue to prioritize investing in AI as “the thing that’s going to shape the next decade.”
Leaders of Google, Apple and Amazon have also suggested to investors that they’re willing to cut costs broadly across departments in order to redirect more funding toward their AI efforts.
Startups are among the beneficiaries.
Microsoft has taken stakes in Mistral, Figure and Humane, in addition to OpenAI. The company invested in Inflection AI before that startup essentially dissolved this month, with its co-founders and most of its staff joining Microsoft. Mistral is an open source-focused company that uses Azure’s cloud and offers its service to Azure clients.
Startup Figure AI is developing general-purpose humanoid robots.
Figure AI
Figure, a startup seeking to build a robot that walks like a human, has raised money from Microsoft, OpenAI and Nvidia and was valued last month at $2.6 billion.
Amazon’s biggest bet is Anthropic, into which it has poured a total of $4 billion so far. The company has also invested in open source AI platform developer Hugging Face.
Google’s investments include Essential AI, which is developing consumer AI programs and is backed by AMD and Nvidia. Alphabet and Nvidia are also investors in Runway ML, a generative AI company known for its video-editing and visual effects tools. Others in Nvidia’s portfolio include Mistral, Perplexity and Cohere.
Meanwhile, many of the Big Tech companies continue to spend internally on developing their own models.
Microsoft has invested in many of the techniques underpinning generative AI through its Microsoft Research division. Amazon reportedly has plans to train a bigger, more data-hungry model than even OpenAI’s GPT-4.
Apple researchers recently published details of their work on MM1, a family of small AI models that can take both text and visual input. Apple is in a different position than its peers in that it doesn’t sell a cloud service. Still, the tech giant is reportedly looking for AI partners, including potentially Google in the U.S. and Baidu in China. An Apple representative declined to comment on AI partners.
Creativity in dealmaking
Daniel Newman, CEO of technology analysis firm Futurum Group, said tech companies are having to get clever when it comes to investing in AI.
For example, Microsoft’s investment in OpenAI included a share of profits from OpenAI’s capped-profit arm, which is governed by a nonprofit, as well as credits to use Microsoft’s cloud service. Microsoft’s deal for Inflection AI amounted to an expensive acquihire, with some reports putting the total outlay at $1 billion. As part of the transaction, Microsoft hired Inflection AI co-founder Mustafa Suleyman to lead a new consumer AI unit whose work includes Copilot.
“I think we’re starting to see some creativity in dealmaking,” said Newman. With respect to Amazon’s agreement with Anthropic, he said an acquisition would be “a lot harder than investing.”
That’s because regulators across the globe are cracking down on Big Tech, making it more difficult to do sizable acquisitions. Even the investments are attracting scrutiny.
In January, the Federal Trade Commission announced it will conduct an extensive inquiry into the biggest players in AI, including Amazon, Alphabet, Microsoft, Anthropic and OpenAI.
FTC Chair Lina Khan described the probe as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.” The regulator has the authority to order companies to file specific reports or answer questions in writing about their businesses.
“We know regulators are becoming increasingly focused on the traditional path of closing an acquisition,” Newman said. “Right now, the game is having access to the most fundamental IP.”
The logo of Japanese company SoftBank Group is seen outside the company’s headquarters in Tokyo on January 22, 2025.
Kazuhiro Nogi | AFP | Getty Images
SoftBank Group said Wednesday that it will acquire Ampere Computing, a startup that designs Arm-based server chips, for $6.5 billion. The company expects the deal to close in the second half of 2025, according to a statement.
Carlyle Group and Oracle both have committed to selling their stakes in Ampere, SoftBank said.
Ampere will operate as an independent subsidiary and will keep its headquarters in Santa Clara, California, the statement said.
“Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision, and deepens our commitment to AI innovation in the United States,” SoftBank Group Chairman and CEO Masayoshi Son was quoted as saying in the statement.
The startup has 1,000 semiconductor engineers, SoftBank said in a separate statement.
Chips that use Arm’s instruction set represent an alternative to chips based on the x86 architecture, which Intel and AMD sell. Arm-based chips often consume less energy. Ampere’s founder and CEO, Renee James, established the startup in 2017 after 28 years at Intel, where she rose to the position of president.
Leading cloud infrastructure provider Amazon Web Services offers Graviton Arm chips for rent that have become popular among large customers. In October, Microsoft started selling access to its own Cobalt 100 Arm-based cloud computing instances.
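For a sense of what renting one of these Arm chips looks like in practice, a minimal sketch using the boto3 SDK is below; the AMI ID is a placeholder and the Graviton-based instance type shown is just one of several families AWS offers.

```python
# Hypothetical sketch: launching an Arm-based (Graviton) EC2 instance with boto3.
# Assumes AWS credentials are configured; the AMI ID is a placeholder and must be
# replaced with a real arm64 image available in the chosen region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-XXXXXXXXXXXXXXXXX",  # placeholder for an arm64 AMI
    InstanceType="m7g.large",         # the "g" in the family name denotes a Graviton (Arm) processor
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```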
Nvidia CEO Jensen Huang introduces new products as he delivers the keynote address at the GTC AI Conference in San Jose, California, on March 18, 2025.
Josh Edelson | AFP | Getty Images
At the end of Nvidia CEO Jensen Huang’s unscripted two-hour keynote on Tuesday, his message was clear: Get the fastest chips that the company makes.
Speaking at Nvidia’s GTC conference, Huang said that the questions clients have about the cost and return on investment of the company’s graphics processors, or GPUs, will go away with faster chips that can be digitally sliced and used to serve artificial intelligence to millions of people at the same time.
“Over the next 10 years, because we could see improving performance so dramatically, speed is the best cost-reduction system,” Huang said in a meeting with journalists shortly after his GTC keynote.
The company dedicated 10 minutes during Huang’s speech to explaining the economics of faster chips for cloud providers, complete with Huang doing back-of-the-envelope math out loud on each chip’s cost per token, a measure of how much it costs to create one unit of AI output.
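To make that logic concrete, here is a rough sketch of the kind of envelope math involved. Every figure below is a hypothetical placeholder rather than a number from Nvidia or from the keynote; the point is only to show why higher throughput can push cost per token down even when the chip itself costs more.

```python
# Hypothetical back-of-the-envelope cost-per-token comparison.
# All inputs are illustrative assumptions, not figures from Nvidia or Huang's keynote.

def cost_per_million_tokens(chip_price_usd, useful_life_years, tokens_per_second):
    """Amortize the chip price over its useful life and divide by total tokens served."""
    seconds_of_life = useful_life_years * 365 * 24 * 3600
    total_tokens = tokens_per_second * seconds_of_life
    return chip_price_usd / total_tokens * 1_000_000

# Assumed older-generation chip: cheaper, but lower AI-serving throughput.
old_chip = cost_per_million_tokens(chip_price_usd=25_000, useful_life_years=4, tokens_per_second=1_000)

# Assumed newer-generation chip: pricier, but much higher throughput.
new_chip = cost_per_million_tokens(chip_price_usd=40_000, useful_life_years=4, tokens_per_second=5_000)

print(f"older chip: ${old_chip:.3f} per million tokens")  # ~$0.198 with these inputs
print(f"newer chip: ${new_chip:.3f} per million tokens")  # ~$0.063 with these inputs
# With these made-up numbers, the faster chip comes out roughly 3x cheaper per token despite
# its higher sticker price, which is the "speed is the best cost-reduction system" argument in miniature.
```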
Huang told reporters that he presented the math because that’s what’s on the mind of hyperscale cloud and AI companies.
The company’s Blackwell Ultra systems, coming out this year, could provide data centers 50 times more revenue than its Hopper systems because they’re so much faster at serving AI to multiple users, Nvidia says.
Investors worry about whether the four major cloud providers — Microsoft, Google, Amazon and Oracle — could slow down their torrid pace of capital expenditures centered around pricey AI chips. Nvidia doesn’t reveal prices for its AI chips, but analysts say Blackwell can cost $40,000 per GPU.
Already, the four largest cloud providers have bought 3.6 million Blackwell GPUs, under Nvidia’s new convention that counts each Blackwell as two GPUs. That compares with 1.3 million of the prior-generation Hopper GPUs, Nvidia said Tuesday.
The company decided to announce its roadmap for 2027’s Rubin Next and 2028’s Feynman AI chips, Huang said, because cloud customers are already planning expensive data centers and want to know the broad strokes of Nvidia’s plans.
“We know right now, as we speak, in a couple of years, several hundred billion dollars of AI infrastructure” will be built, Huang said. “You’ve got the budget approved. You got the power approved. You got the land.”
Huang dismissed the notion that custom chips from cloud providers could challenge Nvidia’s GPUs, arguing they’re not flexible enough for fast-moving AI algorithms. He also expressed doubt that many of the recently announced custom AI chips, known within the industry as ASICs, would make it to market.
“A lot of ASICs get canceled,” Huang said. “The ASIC still has to be better than the best.”
Huang said his focus is on making sure those big projects use the latest and greatest Nvidia systems.
“So the question is, what do you want for several hundred billion dollars?” Huang said.
Microsoft’s Amy Coleman (L) and Kathleen Hogan (R).
Source: Microsoft
Microsoft said Wednesday that company veteran Amy Coleman will become its new executive vice president and chief people officer, succeeding Kathleen Hogan, who has held the position for the past decade.
Hogan will remain an executive vice president but move to a newly established Office of Strategy and Transformation, which is an expansion of the office of the CEO. She will join Microsoft’s group of top executives, reporting directly to CEO Satya Nadella.
Coleman is stepping into a major role, given that Microsoft is among the largest employers in the U.S., with 228,000 total employees as of June 2024. She has worked at the company for more than 25 years over two stints, having first joined as a compensation manager in 1996.
Hogan will remain on the senior leadership team.
“Amy has led HR for our corporate functions across the company for the past six years, following various HR roles partnering across engineering, sales, marketing, and business development spanning 25 years,” Nadella wrote in a memo to employees.
“In that time, she has been a trusted advisor to both Kathleen and to me as she orchestrated many cross-company workstreams as we evolved our culture, improved our employee engagement model, established our employee relations team, and drove enterprise crisis response for our people,” he wrote.
Hogan arrived at Microsoft in 2003 after being a development manager at Oracle and a partner at McKinsey. Under Hogan, some of Microsoft’s human resources practices evolved. She has emphasized the importance of employees having a growth mindset instead of a fixed mindset, drawing on concepts from psychologist Carol Dweck.
“We came up with some big symbolic changes to show that we really were serious about driving culture change, from changing the performance-review system to changing our all-hands company meeting, to our monthly Q&A with the employees,” Hogan said in a 2019 interview with Business Insider.
Hogan pushed for managers to evaluate employees partly on inclusion and oversaw changes in the handling of internal sexual harassment cases.
Coleman had been Microsoft’s corporate vice president for human resources and corporate functions for the past four years. In that role, she was responsible for 200 HR workers and led the development of Microsoft’s hybrid work approach, as well as the HR aspect of the company’s Covid response, according to her LinkedIn profile.