Nvidia CEO Jensen Huang speaks during a press conference at The MGM during CES 2018 in Las Vegas on January 7, 2018.

Mandel Ngan | AFP | Getty Images

Software that can write passages of text or draw pictures that look like a human created them has kicked off a gold rush in the technology industry.

Companies like Microsoft and Google are fighting to integrate cutting-edge AI into their search engines, as billion-dollar startups such as OpenAI and Stability AI race ahead and release their software to the public.

Powering many of these applications is a roughly $10,000 chip that’s become one of the most critical tools in the artificial intelligence industry: the Nvidia A100.

The A100 has become the “workhorse” for artificial intelligence professionals at the moment, said Nathan Benaich, an investor who publishes a newsletter and report covering the AI industry, including a partial list of supercomputers using A100s. Nvidia takes 95% of the market for graphics processors that can be used for machine learning, according to New Street Research.

The A100 is ideally suited for the kind of machine learning models that power tools like ChatGPT, Bing AI, or Stable Diffusion. It’s able to perform many simple calculations simultaneously, which is important for training and using neural network models.

The technology behind the A100 was initially used to render sophisticated 3D graphics in games. It’s often called a graphics processor, or GPU, but these days Nvidia’s A100 is configured and targeted at machine learning tasks and runs in data centers, not inside glowing gaming PCs.

Big companies or startups working on software like chatbots and image generators require hundreds or thousands of Nvidia’s chips, and either purchase them on their own or secure access to the computers from a cloud provider.

Hundreds of GPUs are required to train artificial intelligence models, like large language models. The chips need to be powerful enough to crunch terabytes of data quickly to recognize patterns. After that, GPUs like the A100 are also needed for “inference,” or using the model to generate text, make predictions, or identify objects inside photos.

This means that AI companies need access to a lot of A100s. Some entrepreneurs in the space even see the number of A100s they have access to as a sign of progress.

“A year ago we had 32 A100s,” Stability AI CEO Emad Mostaque wrote on Twitter in January. “Dream big and stack moar GPUs kids. Brrr.” Stability AI is the company that helped develop Stable Diffusion, an image generator that drew attention last fall, and reportedly has a valuation of over $1 billion.

Now, Stability AI has access to over 5,400 A100 GPUs, according to one estimate from the State of AI report, which charts and tracks which companies and universities have the largest collection of A100 GPUs — although it doesn’t include cloud providers, which don’t publish their numbers.

Nvidia’s riding the A.I. train

Nvidia stands to benefit from the AI hype cycle. In the company’s fiscal fourth-quarter earnings report on Wednesday, overall sales declined 21%, but investors still pushed the stock up about 14% on Thursday, mainly because its AI chip business, reported as data centers, grew 11% to more than $3.6 billion in sales during the quarter, showing continued growth.

Nvidia shares are up 65% so far in 2023, outpacing both the S&P 500 and other semiconductor stocks.

Nvidia CEO Jensen Huang couldn’t stop talking about AI on a call with analysts on Wednesday, suggesting that the recent boom in artificial intelligence is at the center of the company’s strategy.

“The activity around the AI infrastructure that we built, and the activity around inferencing using Hopper and Ampere to inference large language models has just gone through the roof in the last 60 days,” Huang said. “There’s no question that whatever our views are of this year as we enter the year has been fairly dramatically changed as a result of the last 60, 90 days.”

Ampere is Nvidia’s code name for the A100 generation of chips. Hopper is the code name for the new generation, including H100, which recently started shipping.

More computers needed

Nvidia A100 processor

Nvidia

Unlike other kinds of software, such as serving a webpage, which uses processing power occasionally in bursts of microseconds, machine learning tasks can take up all of a computer’s processing power, sometimes for hours or days.

This means companies that find themselves with a hit AI product often need to acquire more GPUs to handle peak periods or improve their models.

These GPUs aren’t cheap. In addition to a single A100 on a card that can be slotted into an existing server, many data centers use a system that includes eight A100 GPUs working together.

This system, Nvidia’s DGX A100, has a suggested price of nearly $200,000, although that price does include the chips needed. On Wednesday, Nvidia said it would sell cloud access to DGX systems directly, which will likely reduce the entry cost for tinkerers and researchers.

It’s easy to see how the cost of A100s can add up.

For example, an estimate from New Street Research found that the OpenAI-based ChatGPT model inside Bing’s search could require 8 GPUs to deliver a response to a question in less than one second.

At that rate, Microsoft would need over 20,000 8-GPU servers just to deploy the model in Bing to everyone, suggesting Microsoft’s feature could cost $4 billion in infrastructure spending.

“If you’re from Microsoft, and you want to scale that, at the scale of Bing, that’s maybe $4 billion. If you want to scale at the scale of Google, which serves 8 or 9 billion queries every day, you actually need to spend $80 billion on DGXs,” said Antoine Chkaiban, a technology analyst at New Street Research. “The numbers we came up with are huge. But they’re simply the reflection of the fact that every single user talking to such a large language model requires a massive supercomputer while they’re using it.”
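
Those estimates follow from straightforward multiplication. Here is a minimal back-of-envelope sketch in Python, assuming the roughly $200,000 suggested price for an 8-GPU DGX A100 system and the figures quoted above; it is an illustration of the arithmetic, not New Street Research’s actual model:

    # Back-of-envelope version of the New Street Research estimate.
    # Inputs are the figures quoted in the article, not the firm's model.
    dgx_price = 200_000    # suggested price of an 8-GPU DGX A100, in dollars
    bing_servers = 20_000  # 8-GPU servers estimated for a Bing-scale rollout

    bing_cost = bing_servers * dgx_price
    print(f"Bing-scale hardware: ${bing_cost / 1e9:.0f} billion")  # ~$4 billion

    google_scale_cost = 80e9  # Chkaiban's Google-scale figure
    print(f"DGX systems implied at Google scale: {google_scale_cost / dgx_price:,.0f}")  # 400,000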

The latest version of Stable Diffusion, an image generator, was trained on 256 A100 GPUs, or 32 machines with 8 A100s each, according to information posted online by Stability AI, totaling 200,000 compute hours.

At the market price, training the model alone cost $600,000, Stability AI CEO Mostaque said on Twitter, suggesting in a tweet exchange that the price was unusually low compared with rivals’. That doesn’t count the cost of “inference,” or deploying the model.
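
Taken together, those figures imply a market rate of about $3 per A100 per hour, a number inferred here from the quoted totals rather than stated by Stability AI. A quick sketch of the arithmetic in Python:

    # Training-cost arithmetic from the Stability AI figures above. The
    # per-GPU-hour rate is inferred from the quoted totals, not a published price.
    gpus = 256            # 32 machines with 8 A100s each
    gpu_hours = 200_000   # total compute hours posted by Stability AI
    total_cost = 600_000  # training cost in dollars, per CEO Emad Mostaque

    print(f"Implied market rate: ${total_cost / gpu_hours:.2f} per GPU-hour")  # ~$3.00
    print(f"Wall-clock training time: ~{gpu_hours / gpus / 24:.0f} days")      # ~33 days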

Huang, Nvidia’s CEO, said in an interview with CNBC’s Katie Tarasov that the company’s products are actually inexpensive for the amount of computation that these kinds of models need.

“We took what otherwise would be a $1 billion data center running CPUs, and we shrunk it down into a data center of $100 million,” Huang said. “Now, $100 million, when you put that in the cloud and shared by 100 companies, is almost nothing.”

Huang said that Nvidia’s GPUs allow startups to train models for a much lower cost than if they used a traditional computer processor.

“Now you could build something like a large language model, like a GPT, for something like $10, $20 million,” Huang said. “That’s really, really affordable.”

New competition

Nvidia isn’t the only company making GPUs for artificial intelligence uses. AMD and Intel have competing graphics processors, and big cloud companies like Google and Amazon are developing and deploying their own chips specially designed for AI workloads.

Still, “AI hardware remains strongly consolidated to NVIDIA,” according to the State of AI compute report. As of December, more than 21,000 open-source AI papers said they used Nvidia chips.

Most researchers included in the State of AI Compute Index used the V100, Nvidia’s chip that came out in 2017, but the A100 grew fast in 2022 to become the third-most-used Nvidia chip, just behind a $1,500-or-less consumer graphics chip originally intended for gaming.

The A100 also has the distinction of being one of only a few chips to have export controls placed on it for national defense reasons. Last fall, Nvidia said in an SEC filing that the U.S. government imposed a license requirement barring the export of the A100 and the H100 to China, Hong Kong, and Russia.

“The USG indicated that the new license requirement will address the risk that the covered products may be used in, or diverted to, a ‘military end use’ or ‘military end user’ in China and Russia,” Nvidia said in its filing. Nvidia previously said it adapted some of its chips for the Chinese market to comply with U.S. export restrictions.

The fiercest competition for the A100 may be its successor. The A100 was first introduced in 2020, an eternity ago in chip cycles. The H100, introduced in 2022, is starting to be produced in volume — in fact, Nvidia recorded more revenue from H100 chips in the quarter ending in January than the A100, it said on Wednesday, although the H100 is more expensive per unit.

The H100, Nvidia says, is the first of its data center GPUs to be optimized for transformers, an increasingly important technique that many of the latest and top AI applications use. Nvidia said on Wednesday that it wants to make AI training over 1 million percent faster. That could mean that, eventually, AI companies wouldn’t need so many Nvidia chips.

Google hires Windsurf CEO Varun Mohan, others in latest AI talent deal

Sundar Pichai, chief executive officer of Google.

Marek Antoni Iwanczuk | Sopa Images | Lightrocket | Getty Images

Google on Friday made the latest splash in the AI talent wars, announcing an agreement to bring in Varun Mohan, co-founder and CEO of artificial intelligence coding startup Windsurf.

As part of the deal, Google will also hire other senior Windsurf research and development employees. Google is not investing in Windsurf, but the search giant will take a nonexclusive license to certain Windsurf technology, according to a person familiar with the matter. Windsurf remains free to license its technology to others.

“We’re excited to welcome some top AI coding talent from Windsurf’s team to Google DeepMind to advance our work in agentic coding,” a Google spokesperson wrote in an email. “We’re excited to continue bringing the benefits of Gemini to software developers everywhere.”

The deal between Google and Windsurf comes after the AI coding startup had been in talks with OpenAI for a $3 billion acquisition deal, CNBC reported in April. OpenAI did not immediately respond to a request for comment.

The move ratchets up the talent war in AI, particularly among prominent companies. Meta has made lucrative job offers to several employees at OpenAI in recent weeks. Most notably, the Facebook parent added Scale AI founder Alexandr Wang to lead its AI strategy as part of a $14.3 billion investment in his startup.

Douglas Chen, another Windsurf co-founder, will be among those joining Google in the deal, Jeff Wang, the startup’s new interim CEO and its head of business for the past two years, wrote in a post on X.

“Most of Windsurf’s world-class team will continue to build the Windsurf product with the goal of maximizing its impact in the enterprise,” Wang wrote.

Windsurf has become more popular this year as an option for so-called vibe coding, which is the process of using new age AI tools to write code. Developers and non-developers have embraced the concept, leading to more revenue for Windsurf and competitors, such as Cursor, which OpenAI also looked at buying. All the interest has led investors to assign higher valuations to the startups.

This isn’t the first time Google has hired select people out of a startup. It did the same with Character.AI last summer. Amazon and Microsoft have also absorbed AI talent in this fashion, with the Adept and Inflection deals, respectively.

Microsoft is pushing an agent mode in its Visual Studio Code editor for vibe coding. In April, Microsoft CEO Satya Nadella said AI is composing as much as 30% of his company’s code.

The Verge reported the Google-Windsurf deal earlier on Friday.

WATCH: Google pushes “AI Mode” on homepage

Nvidia’s Jensen Huang sells more than $36 million in stock, catches Warren Buffett in net worth

Jensen Huang, CEO of Nvidia, holds a motherboard as he speaks during the Viva Technology conference dedicated to innovation and startups at Porte de Versailles exhibition center in Paris, France, on June 11, 2025.

Gonzalo Fuentes | Reuters

Nvidia CEO Jensen Huang unloaded roughly $36.4 million worth of stock in the leading artificial intelligence chipmaker, according to a U.S. Securities and Exchange Commission filing.

The sale, which totals 225,000 shares, is part of a plan Huang adopted in March to unload up to 6 million shares of Nvidia through the end of the year. He sold his first batch of stock under the agreement in June, worth about $15 million.

Last year, the tech executive sold about $700 million worth of shares as part of a prearranged plan. Nvidia stock climbed about 1% Friday.

Huang’s net worth has skyrocketed as investors bet on Nvidia’s AI dominance and graphics processing units powering large language models.

The 62-year-old’s wealth has grown by more than a quarter, or about $29 billion, since the start of 2025 alone, based on Bloomberg’s Billionaires Index. His net worth last stood at $143 billion in the index, putting him neck-and-neck with Berkshire Hathaway‘s Warren Buffett at $144 billion.

Shortly after the market opened Friday, Fortune‘s analysis of net worth had Huang ahead of Buffett, with the Nvidia CEO at $143.7 billion and the Oracle of Omaha at $142.1 billion.

The company has also achieved its own notable milestones this year, as it prospers off the AI boom.

On Wednesday, the Santa Clara, California-based chipmaker became the first company to top a $4 trillion market capitalization, beating out both Microsoft and Apple. The chipmaker closed above that milestone Thursday as CNBC reported that the technology titan met with President Donald Trump.

Brooke Seawell, venture partner at New Enterprise Associates, sold about $24 million worth of Nvidia shares, according to an SEC filing. Seawell has been on the company’s board since 1997, according to the company.

Huang still holds more than 858 million shares of Nvidia, both directly and indirectly, in different partnerships and trusts.

WATCH: Nvidia hits $4 trillion in market cap milestone despite curbs on chip exports

Tesla to officially launch in India with planned showroom opening

Elon Musk meets with Indian Prime Minister Narendra Modi at Blair House in Washington DC, USA on February 13, 2025.

Anadolu | Getty Images

Tesla will open a showroom in Mumbai, India, next week, marking the U.S. electric carmaker’s first official foray into the country.

The one-and-a-half-hour launch event for the Tesla “Experience Center” will take place on July 15 at the Maker Maxity Mall in the Bandra Kurla Complex in Mumbai, according to an event invitation seen by CNBC.

Along with the showroom display, which will feature the company’s cars, Tesla is also likely to officially launch direct sales to Indian customers.

The automaker has had its eye on India for a while and now appears to have stepped up efforts to launch locally.

In April, Tesla boss Elon Musk spoke with Indian Prime Minister Narendra Modi to discuss collaboration in areas including technology and innovation. That same month, the EV-maker’s finance chief said the company has been “very careful” in trying to figure out when to enter the market.

Tesla has no manufacturing operations in India, even though the country’s government is likely keen for the company to establish a factory. Instead, the cars sold in India will need to be imported from Tesla’s other manufacturing locations in places like Shanghai, China, and Berlin, Germany.

As Tesla begins sales in India, it will come up against challenges from long-time Chinese rival BYD, as well as local player Tata Motors.

One potential challenge for Tesla comes by way of India’s import duties on electric vehicles, which stand at around 70%. India has tried to entice investment in the country by offering companies a reduced duty of 15% if they commit to invest $500 million and set up manufacturing locally.

HD Kumaraswamy, India’s minister for heavy industries, told reporters in June that Tesla is “not interested” in manufacturing in the country, according to a Reuters report.

Tesla is looking to recruit for roles in Mumbai, according to job listings posted on LinkedIn. These include advisors working in showrooms, security staff, vehicle operators to collect data for its Autopilot feature, and service technicians.

There are also roles being advertised in the Indian capital of New Delhi, including for store managers. It’s unclear if Tesla is planning to launch a showroom in the city.
