
Nvidia CEO Jensen Huang speaks during a press conference at The MGM during CES 2018 in Las Vegas on January 7, 2018.

Mandel Ngan | AFP | Getty Images

Software that can write passages of text or draw pictures that look like a human created them has kicked off a gold rush in the technology industry.

Companies like Microsoft and Google are fighting to integrate cutting-edge AI into their search engines, as billion-dollar competitors such as OpenAI and Stability AI race ahead and release their software to the public.

Powering many of these applications is a roughly $10,000 chip that’s become one of the most critical tools in the artificial intelligence industry: The Nvidia A100.

The A100 has become the “workhorse” for artificial intelligence professionals at the moment, said Nathan Benaich, an investor who publishes a newsletter and report covering the AI industry, including a partial list of supercomputers using A100s. Nvidia takes 95% of the market for graphics processors that can be used for machine learning, according to New Street Research.

A.I. is the catalyst behind Nvidia's earnings beat, says Susquehanna's Christopher Rolland

The A100 is ideally suited for the kind of machine learning models that power tools like ChatGPT, Bing AI, or Stable Diffusion. It’s able to perform many simple calculations simultaneously, which is important for training and using neural network models.
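The parallelism described above can be illustrated with a toy sketch (NumPy on a CPU here, purely as a stand-in; on an A100 the same operation would be spread across thousands of cores at once). A neural-network layer boils down to one large matrix multiplication, i.e. many independent multiply-add operations that a GPU can execute simultaneously:

```python
import numpy as np

# Toy illustration of why GPUs suit neural networks: a single layer's
# forward pass is one big matrix multiply, producing batch_size * hidden_dim
# independent dot products that can all run in parallel.
rng = np.random.default_rng(0)

batch_size, in_dim, hidden_dim = 64, 512, 1024
x = rng.standard_normal((batch_size, in_dim))   # a batch of inputs
w = rng.standard_normal((in_dim, hidden_dim))   # layer weights
b = np.zeros(hidden_dim)                        # layer bias

# Each of the 64 * 1024 output values is an independent dot product;
# a GPU computes them concurrently instead of one after another.
h = np.maximum(x @ w + b, 0.0)                  # linear layer + ReLU

print(h.shape)  # (64, 1024)
```

Both training and inference repeat this pattern billions of times, which is why hardware built for massively parallel arithmetic dominates the workload.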

The technology behind the A100 was initially used to render sophisticated 3D graphics in games. It’s often called a graphics processor, or GPU, but these days Nvidia’s A100 is configured and targeted at machine learning tasks and runs in data centers, not inside glowing gaming PCs.

Big companies or startups working on software like chatbots and image generators require hundreds or thousands of Nvidia’s chips, and either purchase them on their own or secure access to the computers from a cloud provider.

Hundreds of GPUs are required to train artificial intelligence models, like large language models. The chips need to be powerful enough to crunch terabytes of data quickly to recognize patterns. After that, GPUs like the A100 are also needed for “inference,” or using the model to generate text, make predictions, or identify objects inside photos.

This means that AI companies need access to a lot of A100s. Some entrepreneurs in the space even see the number of A100s they have access to as a sign of progress.

“A year ago we had 32 A100s,” Stability AI CEO Emad Mostaque wrote on Twitter in January. “Dream big and stack moar GPUs kids. Brrr.” Stability AI is the company that helped develop Stable Diffusion, an image generator that drew attention last fall, and reportedly has a valuation of over $1 billion.

Now, Stability AI has access to over 5,400 A100 GPUs, according to one estimate from the State of AI report, which charts and tracks which companies and universities have the largest collection of A100 GPUs — although it doesn’t include cloud providers, which don’t publish their numbers publicly.

Nvidia’s riding the A.I. train

Nvidia stands to benefit from the AI hype cycle. In its fiscal fourth-quarter earnings report on Wednesday, the company said overall sales declined 21%, yet investors pushed the stock up about 14% on Thursday, mainly because its AI chip business, reported as the data center segment, grew 11% to more than $3.6 billion in sales during the quarter, showing continued growth.

Nvidia shares are up 65% so far in 2023, outpacing the S&P 500 and other semiconductor stocks alike.

Nvidia CEO Jensen Huang couldn’t stop talking about AI on a call with analysts on Wednesday, suggesting that the recent boom in artificial intelligence is at the center of the company’s strategy.

“The activity around the AI infrastructure that we built, and the activity around inferencing using Hopper and Ampere to influence large language models has just gone through the roof in the last 60 days,” Huang said. “There’s no question that whatever our views are of this year as we enter the year has been fairly dramatically changed as a result of the last 60, 90 days.”

Ampere is Nvidia’s code name for the A100 generation of chips. Hopper is the code name for the new generation, including H100, which recently started shipping.

More computers needed

Nvidia A100 processor

Nvidia

Unlike other kinds of software, such as serving a webpage, which uses processing power in occasional microsecond bursts, machine learning tasks can occupy a computer's entire processing power, sometimes for hours or days.

This means companies that find themselves with a hit AI product often need to acquire more GPUs to handle peak periods or improve their models.

These GPUs aren’t cheap. In addition to a single A100 on a card that can be slotted into an existing server, many data centers use a system that includes eight A100 GPUs working together.

This system, Nvidia’s DGX A100, has a suggested price of nearly $200,000, though that price includes the chips needed. On Wednesday, Nvidia said it would sell cloud access to DGX systems directly, which will likely reduce the entry cost for tinkerers and researchers.

It’s easy to see how the cost of A100s can add up.

For example, an estimate from New Street Research found that the OpenAI-based ChatGPT model inside Bing’s search could require 8 GPUs to deliver a response to a question in less than one second.

At that rate, Microsoft would need over 20,000 8-GPU servers just to deploy the model in Bing to everyone, suggesting Microsoft’s feature could cost $4 billion in infrastructure spending.

“If you’re from Microsoft, and you want to scale that, at the scale of Bing, that’s maybe $4 billion. If you want to scale at the scale of Google, which serves 8 or 9 billion queries every day, you actually need to spend $80 billion on DGXs.” said Antoine Chkaiban, a technology analyst at New Street Research. “The numbers we came up with are huge. But they’re simply the reflection of the fact that every single user taking to such a large language model requires a massive supercomputer while they’re using it.”
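New Street Research's figures can be reproduced with back-of-the-envelope arithmetic. The inputs below are the approximations quoted in this article (the ~$200,000 DGX price and the 20,000-server count), and the 20x Google multiplier is implied by the analyst's $80 billion figure:

```python
# Rough reconstruction of New Street Research's estimate, using figures
# quoted in the article (all inputs are approximations, not exact prices).
dgx_price = 200_000      # ~ suggested price of one 8-GPU DGX A100, USD

bing_servers = 20_000    # 8-GPU servers to serve the model to all Bing users
bing_cost = bing_servers * dgx_price
print(f"Bing-scale:   ${bing_cost / 1e9:.0f}B")    # ~$4B

# The analyst's Google figure implies roughly 20x Bing's query volume,
# so the hardware bill scales by the same factor.
google_cost = bing_cost * 20
print(f"Google-scale: ${google_cost / 1e9:.0f}B")  # ~$80B
```

The point of the exercise is that the dominant cost is simply servers multiplied by unit price; at web-search query volumes, even a $200,000 box becomes a multibillion-dollar line item.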

The latest version of Stable Diffusion, an image generator, was trained on 256 A100 GPUs, or 32 machines with 8 A100s each, totaling 200,000 compute hours, according to information posted online by Stability AI.

At the market price, training the model alone cost $600,000, Stability AI CEO Mostaque said on Twitter, suggesting in a tweet exchange the price was unusually inexpensive compared to rivals. That doesn’t count the cost of “inference,” or deploying the model.
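Stability AI's numbers imply an hourly GPU rate and a wall-clock training time, which simple division recovers. The three inputs come straight from the article; the implied $3/GPU-hour rate is an inference from them, not a published price:

```python
# Figures from the article: 256 A100s, 200,000 total GPU compute hours,
# and a stated training cost of about $600,000.
gpu_count = 256
compute_hours = 200_000
training_cost = 600_000

implied_rate = training_cost / compute_hours   # USD per GPU-hour
wall_clock_hours = compute_hours / gpu_count   # elapsed time if fully parallel

print(f"Implied rate: ${implied_rate:.2f}/GPU-hour")            # $3.00
print(f"Wall clock:  ~{wall_clock_hours:.0f} hours "
      f"({wall_clock_hours / 24:.0f} days)")                    # ~781 hours (~33 days)
```

A roughly $3/GPU-hour rate is indeed on the low end of what cloud providers have charged for A100 capacity, consistent with Mostaque calling the price unusually inexpensive.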

Huang, Nvidia’s CEO, said in an interview with CNBC’s Katie Tarasov that the company’s products are actually inexpensive for the amount of computation that these kinds of models need.

“We took what otherwise would be a $1 billion data center running CPUs, and we shrunk it down into a data center of $100 million,” Huang said. “Now, $100 million, when you put that in the cloud and shared by 100 companies, is almost nothing.”

Huang said that Nvidia’s GPUs allow startups to train models for a much lower cost than if they used a traditional computer processor.

“Now you could build something like a large language model, like a GPT, for something like $10, $20 million,” Huang said. “That’s really, really affordable.”

New competition

Nvidia isn’t the only company making GPUs for artificial intelligence uses. AMD and Intel have competing graphics processors, and big cloud companies like Google and Amazon are developing and deploying their own chips specially designed for AI workloads.

Still, “AI hardware remains strongly consolidated to NVIDIA,” according to the State of AI compute report. As of December, more than 21,000 open-source AI papers said they used Nvidia chips.

Most researchers included in the State of AI Compute Index used the V100, Nvidia’s chip that came out in 2017, but in 2022 the A100 grew quickly to become the third-most-used Nvidia chip, just behind a $1,500-or-less consumer graphics chip originally intended for gaming.

The A100 also has the distinction of being one of only a few chips to have export controls placed on it because of national defense reasons. Last fall, Nvidia said in an SEC filing that the U.S. government imposed a license requirement barring the export of the A100 and the H100 to China, Hong Kong, and Russia.

“The USG indicated that the new license requirement will address the risk that the covered products may be used in, or diverted to, a ‘military end use’ or ‘military end user’ in China and Russia,” Nvidia said in its filing. Nvidia previously said it adapted some of its chips for the Chinese market to comply with U.S. export restrictions.

The fiercest competition for the A100 may be its successor. The A100 was first introduced in 2020, an eternity ago in chip cycles. The H100, introduced in 2022, is starting to be produced in volume — in fact, Nvidia recorded more revenue from H100 chips in the quarter ending in January than the A100, it said on Wednesday, although the H100 is more expensive per unit.

The H100, Nvidia says, is the first one of its data center GPUs to be optimized for transformers, an increasingly important technique that many of the latest and top AI applications use. Nvidia said on Wednesday that it wants to make AI training over 1 million percent faster. That could mean that, eventually, AI companies wouldn’t need so many Nvidia chips.

Software startup deploys Singapore’s first quantum computer for commercial use


Inside Horizon Quantum’s office in Singapore on Dec. 3, 2025. The software firm claimed it is the first private company to deploy a commercial quantum computer in the city-state.

Sha Ying | CNBC International

Singapore-based software firm Horizon Quantum on Wednesday said it has become the first private company to run a quantum computer for commercial use in the city-state, marking a milestone ahead of its plans to list in the U.S.

The start-up, founded in 2018 by quantum researcher Joe Fitzsimons, said the machine is now fully operational. It integrates components from quantum computing suppliers, including Maybell Quantum, Quantum Machines and Rigetti Computing.

According to Horizon Quantum, the new computer also makes it the first pure-play quantum software firm to own its own quantum computer — an integration it hopes will help advance the promising technology.

“Our focus is on helping developers to start harnessing quantum computers to do real-world work,” Fitzsimons, the CEO, told CNBC. “How do we take full advantage of these systems? How do we program them?” 

Horizon Quantum builds the software tools and infrastructure needed to power applications for quantum computing systems. 

“Although we’re very much focused on the software side, it’s really important to understand how the stack works down to the physical level … that’s the reason we have a test bed now,” Fitzsimons said. 

Quantum race

Horizon Quantum hopes to use its new hardware to accelerate the development of real-world quantum applications across industries, from pharmaceuticals to finance.

Quantum systems aim to tackle problems too complex for traditional machines by leveraging principles of quantum mechanics.

For example, designing new drugs, which requires simulating molecular interactions, or running millions of scenarios to assess portfolio risk, can be slow and computationally costly for conventional machines. Quantum computing is expected to provide faster, more accurate models to tackle these problems.

A top executive at Google working on quantum computers told CNBC in March that he believes the technology is only five years away from running practical applications.

Still, today’s quantum systems remain in the nascent stages of development and pose many engineering and programming challenges.

Investment in the space has been rising, however, as major tech companies report technological breakthroughs. Alphabet, Microsoft, Amazon and IBM, along with the U.S. government, are already pouring millions into quantum computing.

Investor attention also received a bump in June after Nvidia chief executive Jensen Huang offered upbeat remarks, saying quantum computing is nearing an “inflection point” and that practical uses may arrive sooner than he had expected.

Nvidia CEO: Quantum computing is reaching an inflection point

Nasdaq listing

Horizon Quantum’s announcement comes ahead of a merger with dMY Squared Technology Group Inc., a special purpose acquisition company. The deal, agreed upon in September, aims to take Horizon public on the Nasdaq under the ticker “HQ.”

The software firm said in September that the transaction valued the company at around $503 million and was expected to close in the first quarter of 2026. 

The launch of its quantum computer also helps cement Singapore’s ambition to be a regional quantum computing hub. The city-state has invested heavily in the technology for years, setting up its first quantum research center in 2007.

Before Horizon Quantum’s system came online, Singapore reportedly had one quantum computer, used primarily for research purposes. Meanwhile, U.S.-based firm Quantinuum plans to deploy another commercial system in 2026.

Singapore’s National Quantum Strategy, unveiled in May 2024, committed 300 million Singapore dollars over five years to expand the sector, with a significant portion directed toward building local quantum computer processors.  

Why Amazon, Google, Microsoft, IBM and numerous startups are racing to build quantum computers

A little-known startup just used AI to make a moon dust battery for Blue Origin


Istari Digital CEO Will Roper talks about the AI technology that built the Blue Origin moon vacuum

Artificial intelligence has created a device that turns moon dust into energy.

The moon vacuum, which was unveiled on Wednesday by Blue Origin at Amazon‘s re:Invent 2025 conference in Las Vegas, was built using critical technology from startup Istari Digital.

“So what it does is sucks up moon dust and it extracts the heat from it so it can be used as an energy source, like turning moon dust into a battery,” Istari CEO Will Roper told CNBC’s Morgan Brennan.

Spacecraft carrying out missions on the lunar surface are typically constrained by lunar night, the roughly two-week stretch of each 28-day cycle during which the moon is cast in darkness and temperatures drop sharply, crippling hardware and rendering it useless unless a strong, long-lasting power source is present.

“Kind of like vacuuming at home, but creating your own electricity while you do it,” he added.

The battery was completely designed by AI, said Roper, who was assistant secretary of the Air Force during President Donald Trump‘s first term and is known for transforming the acquisition process at both the Air Force and, at the time, the newly created Space Force.

Read more CNBC tech news

A major part of the breakthrough in Istari’s technology is the way in which it handles and limits AI hallucinations.

Roper said the platform takes all the requirements a part needs and creates guardrails or a “fence around the playground” that the AI can’t leave while coming up with designs.

“Within that playground, AI can generate to its heart’s content,” he said.

“In the case of Blue Origin’s moon battery, [it] doesn’t tell you the design was a good one, but it tells us that all of the requirements were met, the standards were met, things like that that you got to check before you go operational,” he added.

Istari is backed by former Google CEO Eric Schmidt and already works with the U.S. government, including as a prime contractor with Lockheed Martin on the experimental X-56A unmanned aircraft.

Watch the full interview above and go deeper into the business of the stars with the Manifest Space podcast.

X-Energy’s Kam Ghaffarian on Nuclear Power, AI, and the Space Tech Race

Nvidia CEO Jensen Huang talks chip restrictions with Trump, blasts state-by-state AI regulations


Jensen Huang: State-by-state AI regulation would drag industry to a halt

Nvidia CEO Jensen Huang said he met with President Donald Trump on Wednesday and that the two men discussed chip export restrictions, as lawmakers consider a proposal to limit exports of advanced artificial intelligence chips to nations like China.

“I’ve said it repeatedly that we support export controls, and that we should ensure that American companies have the best and the most and first,” Huang told reporters on Capitol Hill.

Lawmakers were considering including the Guaranteeing Access and Innovation for National Artificial Intelligence Act in a major defense package, known as the National Defense Authorization Act. The GAIN AI Act would require chipmakers like Nvidia and Advanced Micro Devices to give U.S. companies first pick on their AI chips before selling them in countries like China.

The proposal isn’t expected to be part of the NDAA, Bloomberg reported, citing a person familiar with the matter.

Huang said it was “wise” that the proposal is being left out of the annual defense policy bill.

“The GAIN AI Act is even more detrimental to the United States than the AI Diffusion Act,” Huang said.

Nvidia’s CEO also criticized the idea of establishing a patchwork of state laws regulating AI. The notion of state-by-state regulation has generated pushback from tech companies and spurred the creation of a super PAC called “Leading the Future,” which is backed by the AI industry.

“State-by-state AI regulation would drag this industry into a halt and it would create a national security concern, as we need to make sure that the United States advances AI technology as quickly as possible,” Huang said. “A federal AI regulation is the wisest.”

Trump last month urged legislators to include a provision in the NDAA that would preempt state AI laws in favor of “one federal standard.”

But House Majority Leader Steve Scalise (R-LA) told CNBC’s Emily Wilkins on Tuesday the provision won’t make it into the bill, citing a lack of sufficient support. He and other lawmakers will continue to look for ways to establish a national standard on AI, Scalise added.

WATCH: Nvidia currying favor to be able to sell chips in China

Nvidia obviously currying favor to be able to sell chips in China, says Niles Investment's Dan Niles
