In an unmarked office building in Austin, Texas, two small rooms contain a handful of Amazon employees designing two types of microchips for training and accelerating generative AI. These custom chips, Inferentia and Trainium, offer AWS customers an alternative to training their large language models on Nvidia GPUs, which have been getting difficult and expensive to procure. 

“The entire world would like more chips for doing generative AI, whether that’s GPUs or whether that’s Amazon’s own chips that we’re designing,” Amazon Web Services CEO Adam Selipsky told CNBC in an interview in June. “I think that we’re in a better position than anybody else on Earth to supply the capacity that our customers collectively are going to want.”

Yet others have acted faster, and invested more, to capture business from the generative AI boom. When OpenAI launched ChatGPT in November, Microsoft gained widespread attention for hosting the viral chatbot and for investing a reported $13 billion in OpenAI. It was quick to add OpenAI’s generative AI models to its own products, incorporating them into Bing in February.

That same month, Google launched its own generative AI chatbot, Bard, followed by a $300 million investment in OpenAI rival Anthropic.

It wasn’t until April that Amazon announced its own family of large language models, called Titan, along with a service called Bedrock to help developers enhance software using generative AI.

“Amazon is not used to chasing markets. Amazon is used to creating markets. And I think for the first time in a long time, they are finding themselves on the back foot and they are working to play catch up,” said Chirag Dekate, VP analyst at Gartner.

Meta also recently released its own LLM, Llama 2. The open-source ChatGPT rival is now available for people to test on Microsoft’s Azure public cloud.

Chips as ‘true differentiation’

In the long run, Dekate said, Amazon’s custom silicon could give it an edge in generative AI. 

“I think the true differentiation is the technical capabilities that they’re bringing to bear,” he said. “Because guess what? Microsoft does not have Trainium or Inferentia.”

AWS quietly started production of custom silicon back in 2013 with a piece of specialized hardware called Nitro. It’s now the highest-volume AWS chip. Amazon told CNBC there is at least one in every AWS server, with a total of more than 20 million in use. 

In 2015, Amazon bought Israeli chip startup Annapurna Labs. Then in 2018, Amazon launched its Arm-based server chip, Graviton, a rival to x86 CPUs from giants like AMD and Intel.

“Probably high single-digit to maybe 10% of total server sales are Arm, and a good chunk of those are going to be Amazon. So on the CPU side, they’ve done quite well,” said Stacy Rasgon, senior analyst at Bernstein Research.

Also in 2018, Amazon launched its AI-focused chips. That came two years after Google announced its first Tensor Processing Unit, or TPU. Microsoft has yet to announce the Athena AI chip it’s been working on, reportedly in partnership with AMD.

CNBC got a behind-the-scenes tour of Amazon’s chip lab in Austin, Texas, where Trainium and Inferentia are developed and tested. VP of product Matt Wood explained what both chips are for.

“Machine learning breaks down into these two different stages. So you train the machine learning models and then you run inference against those trained models,” Wood said. “Trainium provides about 50% improvement in terms of price performance relative to any other way of training machine learning models on AWS.”

Trainium first came on the market in 2021, following the 2019 release of Inferentia, which is now on its second generation.

Inferentia allows customers “to deliver very, very low-cost, high-throughput, low-latency machine learning inference, which is all the predictions of when you type in a prompt into your generative AI model, that’s where all that gets processed to give you the response,” Wood said.

For now, however, Nvidia’s GPUs are still king when it comes to training models. In July, AWS launched new AI acceleration hardware powered by Nvidia H100s. 

“Nvidia chips have a massive software ecosystem that’s been built up around them over the last like 15 years that nobody else has,” Rasgon said. “The big winner from AI right now is Nvidia.”

Amazon’s custom chips, from left to right, Inferentia, Trainium and Graviton are shown at Amazon’s Seattle headquarters on July 13, 2023.

Joseph Huerta

Leveraging cloud dominance

AWS’ cloud dominance, however, is a big differentiator for Amazon.

“Amazon does not need to win headlines. Amazon already has a really strong cloud install base. All they need to do is to figure out how to enable their existing customers to expand into value creation motions using generative AI,” Dekate said.

When choosing among Amazon, Google and Microsoft for generative AI, millions of AWS customers may be drawn to Amazon because they’re already familiar with it, running other applications and storing their data there.

“It’s a question of velocity. How quickly can these companies move to develop these generative AI applications is driven by starting first on the data they have in AWS and using compute and machine learning tools that we provide,” explained Mai-Lan Tomsen Bukovec, VP of technology at AWS.

AWS is the world’s biggest cloud computing provider, with 40% of the market share in 2022, according to technology industry researcher Gartner. Although operating income has been down year-over-year for three quarters in a row, AWS still accounted for 70% of Amazon’s overall $7.7 billion operating profit in the second quarter. AWS’ operating margins have historically been far wider than those at Google Cloud.

AWS also has a growing portfolio of developer tools focused on generative AI.

“Let’s rewind the clock even before ChatGPT. It’s not like after that happened, suddenly we hurried and came up with a plan because you can’t engineer a chip in that quick a time, let alone you can’t build a Bedrock service in a matter of 2 to 3 months,” said Swami Sivasubramanian, AWS’ VP of database, analytics and machine learning.

Bedrock gives AWS customers access to large language models made by Anthropic, Stability AI, AI21 Labs and Amazon’s own Titan.

“We don’t believe that one model is going to rule the world, and we want our customers to have the state-of-the-art models from multiple providers because they are going to pick the right tool for the right job,” Sivasubramanian said.

An Amazon employee works on custom AI chips, in a jacket branded with AWS’ chip Inferentia, at the AWS chip lab in Austin, Texas, on July 25, 2023.

Katie Tarasov

One of Amazon’s newest AI offerings is AWS HealthScribe, a service unveiled in July to help doctors draft patient visit summaries using generative AI. Amazon also has SageMaker, a machine learning hub that offers algorithms, models and more. 

Another big tool is coding companion CodeWhisperer, which Amazon said has enabled developers to complete tasks 57% faster on average. Last year, Microsoft also reported productivity boosts from its coding companion, GitHub Copilot. 

In June, AWS announced a $100 million generative AI innovation “center.” 

“We have so many customers who are saying, ‘I want to do generative AI,’ but they don’t necessarily know what that means for them in the context of their own businesses. And so we’re going to bring in solutions architects and engineers and strategists and data scientists to work with them one on one,” AWS CEO Selipsky said.

Although so far AWS has focused largely on tools instead of building a competitor to ChatGPT, a recently leaked internal email shows Amazon CEO Andy Jassy is directly overseeing a new central team building out expansive large language models, too.

In the second-quarter earnings call, Jassy said a “very significant amount” of AWS business is now driven by AI and the more than 20 machine learning services it offers. Some examples of customers include Philips, 3M, Old Mutual and HSBC.

The explosive growth in AI has come with a flurry of security concerns from companies worried that employees are putting proprietary information into the training data used by public large language models.

“I can’t tell you how many Fortune 500 companies I’ve talked to who have banned ChatGPT. So with our approach to generative AI and our Bedrock service, anything you do, any model you use through Bedrock will be in your own isolated virtual private cloud environment. It’ll be encrypted, it’ll have the same AWS access controls,” Selipsky said.

For now, Amazon is only accelerating its push into generative AI, telling CNBC that “over 100,000” customers are using machine learning on AWS today. Although that’s a small percentage of AWS’ millions of customers, analysts say that could change.

“What we are not seeing is enterprises saying, ‘Oh, wait a minute, Microsoft is so ahead in generative AI, let’s just go out and let’s switch our infrastructure strategies, migrate everything to Microsoft,’” Dekate said. “If you’re already an Amazon customer, chances are you’re likely going to explore Amazon ecosystems quite extensively.”

— CNBC’s Jordan Novet contributed to this report.

CORRECTION: This article has been updated to reflect Inferentia as the chip used for machine learning inference.

Cognition to buy AI startup Windsurf days after Google poached CEO in $2.4 billion licensing deal

In this photo illustration, a man holds a smartphone displaying the logo of U.S. artificial intelligence company Cognition AI Inc. in front of the company’s website.

Timon Schneider | SOPA Images | Sipa USA | AP

Artificial intelligence startup Cognition announced it’s acquiring Windsurf, the AI coding company that lost its CEO and several other senior employees to Google just days earlier.

Cognition said on Monday that it will purchase Windsurf’s intellectual property, product, trademark, brand and talent, but didn’t disclose terms of the deal. It’s the latest development in an AI talent war, as companies like Meta, Google and OpenAI fiercely compete for top engineers and researchers.

OpenAI had been in talks to acquire Windsurf for about $3 billion in April, but the deal fell apart, and Google said on Friday that it hired Windsurf’s co-founder and CEO Varun Mohan. Google is paying $2.4 billion in licensing fees and for compensation, as CNBC previously reported.

“Every new employee of Cognition will be treated the same way as existing employees: with transparency, fairness, and deep respect for their abilities and value,” Cognition CEO Scott Wu wrote in a memo to employees on Monday. “After today, our efforts will be as a united and aligned team. There’s only one boat and we’re all in it together.”

Cognition didn’t immediately respond to CNBC’s request for comment. Windsurf directed CNBC to Cognition.

Cognition is best known for its AI coding agent named Devin, which is designed to help engineers build software faster. As of March, the startup had raised hundreds of millions of dollars at a valuation of close to $4 billion, according to a report from Bloomberg.

Both companies are backed by Peter Thiel’s Founders Fund. Other investors in Windsurf include Greenoaks, Kleiner Perkins and General Catalyst.

“I’m overwhelmed with excitement and optimism, but most of all, gratitude,” Jeff Wang, the interim CEO of Windsurf, wrote in a post on X on Monday. “Trying times reveal character, and I couldn’t be prouder of how every single person at Windsurf showed up these last three days for each other and for our users.”

Wu said that the acquisition ensures all Windsurf employees are “treated with respect and well taken care of in this transaction.” All employees will participate financially in the deal, have vesting cliffs waived and receive fully accelerated vesting for their work to date, according to the memo.

“There’s never been a more exciting time to build,” Wu wrote.

Musk’s xAI faces European scrutiny over Grok’s ‘horrific’ antisemitic posts

The Grok logo is displayed on a smartphone with xAI visible in the background in this photo illustration on April 1, 2024.

Jonathan Raa | Nurphoto | Getty Images

The European Union on Monday called in representatives from Elon Musk’s xAI after the company’s social network X and chatbot Grok generated and spread antisemitic hate speech, including praise for Adolf Hitler, last week.

A spokesperson for the European Commission told CNBC via e-mail that a technical meeting will take place on Tuesday.

xAI did not immediately respond to a request for comment.

Sandro Gozi, a member of Italy’s parliament and member of the Renew Europe group, last week urged the Commission to hold a formal inquiry.

“The case raises serious concerns about compliance with the Digital Services Act (DSA) as well as the governance of generative AI in the Union’s digital space,” Gozi wrote.

X was already under a Commission probe for possible violations of the DSA.

Grok also generated and spread offensive posts about political leaders in Poland and Turkey, including Polish Prime Minister Donald Tusk and Turkish President Recep Tayyip Erdogan.

Over the weekend, xAI posted a statement apologizing for the hateful content.

“First off, we deeply apologize for the horrific behavior that many experienced. … After careful investigation, we discovered the root cause was an update to a code path upstream of the @grok bot,” the company said in the statement.

Musk and his xAI team launched a new version of Grok Wednesday night amid the backlash. Musk called it “the smartest AI in the world.”

xAI works with other businesses run and largely owned by Musk, including Tesla, the publicly traded automaker, and SpaceX, the U.S. aerospace and defense contractor.

Despite Grok’s recent outburst of hate speech, the U.S. Department of Defense awarded xAI a $200 million contract to develop AI. Anthropic, Google and OpenAI also received AI contracts.

CNBC’s April Roach contributed to this article.

Meta removes 10 million Facebook profiles in effort to combat spam

Meta CEO Mark Zuckerberg looks on before the luncheon on the inauguration day of U.S. President Donald Trump’s second presidential term in Washington on Jan. 20, 2025.

Evelyn Hockstein | Reuters

Meta on Monday said it removed about 10 million profiles that were impersonating large content producers in the first half of 2025, part of the company’s effort to combat “spammy content.”

The crackdown is part of Meta’s broader effort to make the Facebook feed more relevant and authentic by taking action against and removing accounts that engage in “spammy” behavior, such as content created using artificial intelligence tools.

As part of that initiative, Meta is also rolling out stricter measures to promote original posts from creators, the company said in a blog post.

Facebook also took action against approximately 500,000 accounts that it identified as engaging in inauthentic behavior and spam. These actions included demoting comments and reducing the distribution of content, measures intended to make it harder for these accounts to monetize their posts.

Meta defines unoriginal content as images or videos reused without crediting the original creator. The company said it now has technology that will detect duplicate videos and reduce the distribution of that content.

The action against spam and inauthentic content comes as Meta increases its investment in AI, with CEO Mark Zuckerberg on Monday announcing plans to spend “hundreds of billions of dollars” on AI compute infrastructure to bring the company’s first supercluster online next year.

This mandate comes at a time when AI is making it easier to mass-produce content across social media platforms. Other platforms are also taking action to combat the increase of spammy, low-quality content on social media, also known as “AI slop.”

Google’s YouTube announced a policy change this month that makes mass-produced or repetitive content ineligible for monetization.

The announcement sparked confusion on social media, with many users believing it was a reversal of YouTube’s stance on AI content. However, YouTube clarified that the policy change is aimed at curbing unoriginal, spammy and repetitive videos.

“We welcome creators using AI tools to enhance their storytelling, and channels that use AI in their content remain eligible to monetize,” said a spokesperson for YouTube in a blog post to clarify the new policy.

YouTube’s new policy change will take effect on Tuesday.
