Amazon is making its largest outside investment in its three-decade history as it looks to gain an edge in the artificial intelligence race.
The tech giant said it will spend another $2.75 billion backing Anthropic, a San Francisco-based startup widely viewed as a front-runner in generative artificial intelligence. Its foundation model and chatbot, Claude, competes with OpenAI and its ChatGPT.
The companies announced an initial $1.25 billion investment in September, and said at the time that Amazon would invest up to $4 billion. Wednesday’s news marks Amazon’s second tranche of that funding.
Amazon said it will maintain a minority stake in Anthropic and won’t take a board seat. The deal was struck at the AI startup’s last valuation, which was $18.4 billion, according to a source.
Over the past year, Anthropic closed five different funding deals worth about $7.3 billion — and with the new Amazon investment, the total exceeds $10 billion. The company’s product directly competes with OpenAI’s ChatGPT in both the enterprise and consumer worlds, and it was founded by ex-OpenAI research executives and employees.
News of the Amazon investment comes weeks after Anthropic debuted Claude 3, its newest suite of AI models that it says are its fastest and most powerful yet. The company said the most capable of its new models outperformed OpenAI’s GPT-4 and Google’s Gemini Ultra on industry benchmark tests, such as undergraduate-level knowledge, graduate-level reasoning and basic mathematics.
“Generative AI is poised to be the most transformational technology of our time, and we believe our strategic collaboration with Anthropic will further improve our customers’ experiences, and look forward to what’s next,” said Swami Sivasubramanian, vice president of data and AI at Amazon’s cloud unit, AWS.
Amazon’s move is the latest in a spending blitz among cloud providers to stay ahead in the AI race. And it’s the second update in a week to Anthropic’s capital structure. Late Friday, bankruptcy filings showed crypto exchange FTX struck a deal with a group of buyers to sell the majority of its stake in Anthropic, confirming a CNBC report from last week.
The term generative AI entered the mainstream and business vernacular seemingly overnight, and the field has exploded over the past year, with a record $29.1 billion invested across nearly 700 deals in 2023, according to PitchBook. OpenAI’s ChatGPT first showcased the tech’s ability to produce human-like language and creative content in late 2022. Since then, OpenAI has said more than 92% of Fortune 500 companies have adopted the platform, spanning industries such as financial services, legal applications and education.
Cloud providers like Amazon Web Services don’t want to be caught flat-footed.
It’s a symbiotic relationship. As part of the agreement, Anthropic said it will use AWS as its primary cloud provider. It will also use Amazon chips to train, build and deploy its foundation models. Amazon has been designing its own chips that may eventually compete with Nvidia.
Microsoft has been on its own spending spree with a high-profile investment in OpenAI. Microsoft’s OpenAI bet has reportedly jumped to $13 billion as the startup’s valuation has topped $29 billion. Microsoft’s Azure is also OpenAI’s exclusive provider for computing power, which means the startup’s success and new business flows back to Microsoft’s cloud servers.
Google, meanwhile, has also backed Anthropic, with its own deal for Google Cloud. It agreed to invest up to $2 billion in Anthropic, comprising a $500 million cash infusion, with another $1.5 billion to be invested over time. Salesforce is also a backer.
Anthropic’s new model suite, announced earlier this month, marks the first time the company has offered “multimodality,” which extends generative AI beyond text to inputs such as photos and video.
But multimodality, and increasingly complex AI models, also lead to more potential risks. Google recently took its AI image generator, part of its Gemini chatbot, offline after users discovered historical inaccuracies and questionable responses, which circulated widely on social media.
Anthropic’s Claude 3 does not generate images. Instead, it only allows users to upload images and other documents for analysis.
“Of course no model is perfect, and I think that’s a very important thing to say upfront,” Anthropic co-founder Daniela Amodei told CNBC earlier this month. “We’ve tried very diligently to make these models the intersection of as capable and as safe as possible. Of course there are going to be places where the model still makes something up from time to time.”
Amazon’s biggest venture bet before Anthropic was electric vehicle maker Rivian, in which it invested more than $1.3 billion. That, too, was a strategic partnership.
These partnerships have been picking up in the face of more antitrust scrutiny. A drop in acquisitions by the Magnificent Seven — Amazon, Microsoft, Apple, Nvidia, Alphabet, Meta and Tesla — has been offset by an increase in venture-style investing, according to PitchBook.
AI and machine-learning investments from those seven tech companies jumped to $24.6 billion last year, up from $4.4 billion in 2022, according to PitchBook. At the same time, Big Tech’s M&A activity fell from 40 deals in 2022 to 13 last year.
“There is a sort of paranoia motivation to invest in potential disruptors,” PitchBook AI analyst Brendan Burke said in an interview. “The other motivation is to increase sales, and to invest in companies that are likely to use the other company’s product — they tend to be partners, more so than competitors.”
Big Tech’s spending spree in AI has come under fire for the seemingly circular nature of these agreements. Some observers, including Benchmark’s Bill Gurley, have accused the tech giants of using their investments in AI startups to funnel cash back to their own cloud businesses, which in turn may show up as revenue. Gurley described it as a way to “goose your own revenues.”
The U.S. Federal Trade Commission is taking a closer look at these partnerships, including Microsoft’s OpenAI deal and Google and Amazon’s Anthropic investments. What’s sometimes called “round tripping” can be illegal — especially if the aim is to mislead investors. But Amazon has said that this type of venture investing does not constitute round tripping.
FTC Chair Lina Khan announced the inquiry during the agency’s tech summit on AI, describing it as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”
Internet firm Cloudflare will start blocking artificial intelligence crawlers by default from accessing content without website owners’ permission or compensation, in a move that could significantly impact AI developers’ ability to train their models.
Starting Tuesday, every new web domain that signs up to Cloudflare will be asked whether to allow AI crawlers, effectively giving site owners the ability to prevent bots from scraping data from their websites.
Cloudflare is what’s called a content delivery network, or CDN. It helps businesses deliver online content and applications faster by caching the data closer to end users. CDNs play a significant role in making sure people can access web content seamlessly every day.
Roughly 16% of global internet traffic goes directly through Cloudflare’s CDN, the firm estimated in a 2023 report.
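As a rough illustration of that caching layer, the short Python sketch below fetches a page and prints the cf-cache-status header that Cloudflare attaches to responses it serves; values such as HIT or MISS indicate whether the content came from an edge cache rather than the origin server. The URL is a placeholder, and the header only appears on sites that route traffic through Cloudflare.

```python
from urllib import request

# Placeholder URL; substitute any site served through Cloudflare.
URL = "https://example.com/"

with request.urlopen(URL) as resp:
    # Cloudflare adds a "cf-cache-status" header (e.g. HIT, MISS, DYNAMIC)
    # indicating whether the response was served from an edge cache.
    status = resp.headers.get("cf-cache-status")
    print("cf-cache-status:", status or "not present (site may not be behind Cloudflare)")
```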
“AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate,” said Matthew Prince, co-founder and CEO of Cloudflare, in a statement Tuesday.
“This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone,” he added.
What are AI crawlers?
AI crawlers are automated bots designed to extract large quantities of data from websites, databases and other sources of information to train large language models from the likes of OpenAI and Google.
Whereas the internet previously rewarded creators by directing users to original websites, according to Cloudflare, AI crawlers are now breaking that model by collecting text, articles and images to generate responses to queries, meaning users no longer need to visit the original source.
This, the company adds, is depriving publishers of vital traffic and, in turn, revenue from online advertising.
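To make the mechanism concrete, here is a minimal sketch, in Python and using only the standard library, of the kind of text extraction such a crawler performs: it downloads a page and strips the HTML down to plain text. The URL is a placeholder, and real training crawlers operate at vastly larger scale with far more sophisticated parsing.

```python
from html.parser import HTMLParser
from urllib import request

# Placeholder URL used purely for illustration.
URL = "https://example.com/article"

class TextExtractor(HTMLParser):
    """Collects the visible text of a page, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

page_html = request.urlopen(URL).read().decode("utf-8", errors="ignore")
parser = TextExtractor()
parser.feed(page_html)
print(" ".join(parser.chunks)[:500])  # first 500 characters of extracted text
```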
Tuesday’s move builds on a tool Cloudflare launched in September last year that gave publishers the ability to block AI crawlers with a single click. Now, the company is going a step further by making blocking the default for new domains that sign up for its services.
OpenAI says it declined to participate when Cloudflare previewed its plan to block AI crawlers by default on the grounds that the content delivery network is adding a middleman to the system.
The Microsoft-backed AI lab stressed its role as a pioneer in the use of robots.txt, a plain-text file that tells automated crawlers which parts of a website they may access, and said its crawlers respect publisher preferences.
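By way of illustration, the sketch below uses Python’s standard urllib.robotparser module to show how a crawler that honors robots.txt checks a site’s rules before fetching a page. The user agent and URLs are placeholders, not the identifiers any real crawler actually uses.

```python
from urllib import robotparser

# Hypothetical crawler user agent and target site, used purely for illustration.
USER_AGENT = "ExampleAIBot"
SITE = "https://example.com"

# Fetch and parse the site's robots.txt file.
rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

# A crawler that respects publisher preferences only fetches a URL if robots.txt allows it.
url = f"{SITE}/articles/some-story"
if rp.can_fetch(USER_AGENT, url):
    print("robots.txt permits crawling", url)
else:
    print("robots.txt disallows crawling", url)
```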
“AI crawlers are typically seen as more invasive and selective when it comes to the data they consume. They have been accused of overwhelming websites and significantly impacting user experience,” Matthew Holman, a partner at U.K. law firm Cripps, told CNBC.
“If effective, the development would hinder AI chatbots’ ability to harvest data for training and search purposes,” he added. “This is likely to lead to a short term impact on AI model training and could, over the long term, affect the viability of models.”
Elon Musk announced his new company xAI, which he says has the goal of understanding the true nature of the universe.
xAI, the artificial intelligence startup run by Elon Musk, raised a combined $10 billion in debt and equity, Morgan Stanley said.
Half of that sum came from secured notes and term loans, while the other $5 billion was raised through strategic equity investment, the bank said Monday.
The funding gives xAI more firepower to build out infrastructure and develop its Grok AI chatbot as it looks to compete with bitter rival OpenAI, as well as with a swathe of other players including Amazon-backed Anthropic.
In May, Musk told CNBC that xAI has already installed 200,000 graphics processing units (GPUs) at its Colossus facility in Memphis, Tennessee. Colossus is xAI’s supercomputer that trains the firm’s AI. Musk at the time said that his company will continue buying chips from semiconductor giants Nvidia and AMD and that xAI is planning a 1-million-GPU facility outside of Memphis.
Addressing the latest funds raised by the company, Morgan Stanley said “the proceeds will support xAI’s continued development of cutting-edge AI solutions, including one of the world’s largest data centers and its flagship Grok platform.”
xAI continues to release updates to Grok and unveiled the Grok 3 AI model in February. Musk has sought to boost the use of Grok by integrating the AI model with the X social media platform, formerly known as Twitter. In March, xAI acquired X in a deal that valued the site at $33 billion and the AI firm at $80 billion. It’s unclear if the new equity raise has changed that valuation.
xAI was not immediately available for comment.
Last year, xAI raised $6 billion at a valuation of $50 billion, CNBC reported.
Morgan Stanley said the latest debt offering was “oversubscribed and included prominent global debt investors.”
Competition among American AI startups is intensifying, with companies raising huge amounts of funding to buy chips and build infrastructure.
Musk has called Grok a “maximally truth-seeking” AI that is also “anti-woke,” in a bid to set it apart from its rivals. But this has not come without its fair share of controversy. Earlier this year, Grok responded to user queries with unrelated comments about the controversial topic of “white genocide” in South Africa.
Musk has also clashed with fellow AI leaders, including OpenAI’s Sam Altman. Most famously, Musk claimed that OpenAI, which he co-founded, has deviated from its original mission of developing AI to benefit humanity as a nonprofit and is instead focused on commercial success. In February, Musk, alongside a group of investors, put in a bid of $97.4 billion to buy control of OpenAI. Altman swiftly rejected the offer.
— CNBC’s Lora Kolodny and Jonathan Vanian contributed to this report.
Huawei has open-sourced two of its artificial intelligence models — a move tech experts say will help the U.S.-blacklisted firm continue to build its AI ecosystem and expand overseas.
The Chinese tech giant announced on Monday the open-sourcing of the AI models under its Pangu series, as well as some of its model reasoning technology.
Tech experts told CNBC that Huawei’s latest announcements not only highlight how it is solidifying itself as an open-source LLM player, but also how it is strengthening its position across the entire AI value chain as it works to overcome U.S.-led AI chip export restrictions.
In recent years, the company has transformed from a competent private sector telecommunications firm into a “muscular technology juggernaut straddling the entire AI hardware and software stack,” said Paul Triolo, partner and senior vice president for China at advisory firm DGA-Albright Stonebridge Group.
In its announcement Monday, Huawei called the open-source moves another key measure for Huawei’s “Ascend ecosystem strategy” that would help speed up the adoption of AI across “thousands of industries.”
The Ascend ecosystem refers to AI products built around the company’s Ascend AI chip series, which are widely considered to be China’s leading competitor to products from American chip giant Nvidia. Nvidia is restricted from selling its advanced products to China.
A Google-like strategy?
Making Pangu available as open source allows developers and businesses to test the models and customize them for their needs, said Lian Jye Su, chief analyst at Omdia. “The move is expected to incentivize the use of other Huawei products,” he added.
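As a rough sketch of what that kind of hands-on testing can look like, the Python snippet below loads an openly released causal language model with the Hugging Face transformers library and generates a short completion. The model identifier is a placeholder rather than an actual Pangu release, and Huawei’s models may ship with their own tooling instead.

```python
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository name; substitute the identifier of an actual openly released model.
MODEL_ID = "example-org/open-llm-7b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Open-sourcing AI models allows developers to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```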
According to experts, the coupling of Huawei’s Pangu models with the company’s AI chips and related products gives the company a unique advantage, allowing it to optimize its AI solutions and applications.
While competitors like Baidu have LLMs with broad capabilities, Huawei has focused on specialized AI models for sectors such as government, finance and manufacturing.
“Huawei is not as strong as companies like DeepSeek and Baidu at the overall software level – but it doesn’t need to be,” said Marc Einstein, research director at Counterpoint Research.
“Its objective is to ultimately use open source products to drive hardware sales, which is a completely different model from others. It also collaborates with DeepSeek, Baidu and others and will continue to do so,” he added.
Ray Wang, principal analyst at Constellation Research, said the chip-to-model strategy is similar to that of Google, a company that is also developing AI chips and AI models like its open-source Gemma models.
Huawei’s announcement on Monday could also help with its international ambitions. Huawei, along with players like Zhipu AI, has been slowly making inroads into new overseas markets.
In its announcement Monday, Huawei invited developers, corporate partners and researchers around the world to download and use its new open-source products in order to gather feedback and improve them.
“Huawei’s open-source strategy will resonate well in developing countries where enterprises are more price-sensitive as is the case with [Huawei’s] other products,” Einstein said.
As part of its global strategy, the company has also been looking to bring its latest AI data center solutions to new countries.