Nvidia CEO Jensen Huang attends the “Winning the AI Race” Summit in Washington D.C., U.S., July 23, 2025.
It’s been two years since the explosion of generative artificial intelligence started to transform Nvidia’s business. Since then, the chipmaker’s revenue has more than tripled and profits have quadrupled.
Nvidia’s fiscal second-quarter earnings report, scheduled for Wednesday, will mark two full years of that growth, over which the company has shifted from being known as a maker of gaming chips to its current position at the heart of the technology industry.
Last month, Nvidia became the first company to hit a $4 trillion market cap, and it’s continued to appreciate in value. Since the end of 2022, around the time OpenAI launched ChatGPT and sparked the generative AI boom, Nvidia’s stock price is up twelvefold. It’s up 33% this year, closing on Friday at $177.99.
Growth is still substantial for a company of Nvidia’s size, but it has slowed dramatically. After five straight quarters of triple-digit expansion in 2023 and 2024, revenue growth dipped to 69% in the fiscal first quarter this year. Nvidia is expected to report a year-over-year jump of 53% to $45.9 billion in its second-quarter report, according to LSEG’s consensus of analyst estimates.
Data center revenue in the first quarter accounted for 88% of Nvidia’s total sales, the clearest sign of how significant AI has become to its business. The company said that 34% of total sales last year came from three unnamed customers. Analysts say Nvidia’s top end users are major internet companies and cloud providers such as Microsoft, Google, Amazon and Meta.
“The assumptions and performance of Nvidia really dictates what the market is going to start to price into the AI trade, and that whole AI trade has essentially been driving the market this past year,” said Melissa Otto, head of Visible Alpha Research at S&P Global, which aggregates Wall Street research.
Nvidia makes up about 7.5% of the S&P 500.
Tech’s megacap companies, other than Nvidia, reported quarterly results in late July, updating Wall Street on their investment plans. In all, they’re looking to spend roughly $320 billion on AI technology and data center buildouts this year.
OpenAI, which is still private but has a valuation in the hundreds of billions of dollars, says it will team up with SoftBank and Oracle to spend $500 billion over the next four years on the Stargate project, which President Donald Trump announced in January.
Jensen Huang, co-founder and CEO of Nvidia, displays the new Blackwell GPU chip during the Nvidia GPU Technology Conference in San Jose, California, on March 18, 2024.
Analysts say about half of AI capital spending ends up with Nvidia. The company’s reliance on the so-called hyperscalers leaves it vulnerable to changes in the macroeconomic environment and in the artificial intelligence industry, which remains hard to predict.
OpenAI CEO Sam Altman said last week that he believes “investors as a whole are overexcited about AI,” and even said it could be a “bubble.”
But don’t expect a pullback yet. OpenAI CFO Sarah Friar told CNBC on Wednesday that the company is “constantly” short of computing power.
As always, Wall Street will be paying close attention to Nvidia’s guidance and other forward-looking commentary from CEO Jensen Huang. For the fiscal third quarter, analysts are expecting revenue growth of 50% to $52.7 billion, according to LSEG. If Nvidia guides higher and tops estimates for the second quarter, analysts say that kind of “beat and raise” could drive AI optimism even higher.
Blackwell ramp
The most important offering from Nvidia is its Blackwell line, which includes individual graphics processing units and entire systems tying together 72 GPUs.
Strong Blackwell numbers would affirm Nvidia’s continuing technological lead and foothold among its key customers, said Ryuta Makino, an analyst at Gamco Investors, which has a stake in the company.
“It solidifies that hyperscaler spending is still very strong with the Blackwell ramp,” Makino said.
Nvidia said in May that the Blackwell line reached $27 billion in sales, accounting for about 70% of data center revenue. That’s a steep increase from $11 billion in the prior quarter.
As more Blackwell chips get installed, experts anticipate that their superior computing power will enable companies like OpenAI and Anthropic to create even more capable AI models. OpenAI’s GPT-5, which was announced earlier this month, was trained on Nvidia’s last-generation Hopper chips, not the newer Blackwell processors.
Nvidia said last year that Blackwell would be limited by supply — how many chips its partners can build and ship — and not by demand.
Blackwell Ultra is expected to start shipping in the second half of 2025. Nvidia recently pushed back on an analyst report from Asia that said Rubin, the chip technology expected to comprise the bulk of GPU sales in 2027, was seeing early production problems.
One visible sign of Nvidia’s rise is Huang’s worldwide fame. He’s regularly name-checked by Trump and during the quarter traveled to meet with business leaders and officials in Taiwan, China, Germany, England and Saudi Arabia.
Huang recently struck a deal with Trump to regain access to the Chinese market. Nvidia will pay 15% of its China chip revenue to the U.S. government in exchange for licenses to export its China-focused AI chip called the H20, Trump said this month. The president added that he’d asked for 20%, but Huang bargained him down.
The H20 is worth a lot to Nvidia. The chip would have contributed about $8 billion in sales in the second quarter, Nvidia said in May, before the U.S. government said it would require a license to ship it to China, effectively shutting off sales.
Nvidia did not include any H20 sales in its guidance for the second quarter, and analysts doubt that it will include any in its forecast for the current period, partially because the Chinese government is pressuring its cloud providers to use homegrown chips from companies such as Huawei.
If H20 is included in guidance, it could boost revenue expectations by about $2 billion to $3 billion, according to analysts at KeyBanc, who recommend buying the stock. But they said they expect Nvidia to completely exclude it, following Advanced Micro Devices’ lead from early August.
“Additionally, given a potential 15% tax on AI exports and pressure from the China government for its AI providers to use domestic AI chips, we expect management to guide conservatively,” the KeyBanc analysts wrote.
Nvidia is working on a new China AI chip based on Blackwell that would also likely need the president’s approval.
“I’m sure he’s pitching the president all the time,” Commerce Secretary Howard Lutnick said about Huang last week on CNBC’s “Squawk on the Street.”
A man walks past a logo of SK Hynix at the lobby of the company’s Bundang office in Seongnam on January 29, 2021.
South Korea’s SK Hynix on Wednesday posted record quarterly revenue and profit, boosted by strong demand for its high-bandwidth memory used in generative AI chipsets.
Here are SK Hynix’s third-quarter results versus LSEG SmartEstimates, which are weighted toward forecasts from analysts who are more consistently accurate:
Revenue: 24.45 trillion won ($17.13 billion) vs. 24.73 trillion won
Operating profit: 11.38 trillion won vs. 11.39 trillion won
Revenue rose about 39% in the September quarter compared with the same period a year earlier, while operating profit surged 62%.
On a quarter-on-quarter basis, revenue was up 10%, while operating profit grew 24%.
SK Hynix makes memory chips that are used to store data and can be found in everything from servers to consumer devices such as smartphones and laptops.
The company has benefited from a boom in artificial intelligence as a key supplier of high-bandwidth memory or HBM chips used to power AI data center servers.
“As demand across the memory segment has soared due to customers’ expanding investments in AI infrastructure, SK Hynix once again surpassed the record-high performance of the previous quarter due to increased sales of high value-added products,” SK Hynix said in its earnings release.
HBM falls into the broader category of dynamic random access memory, or DRAM — a type of semiconductor memory used to store data and program code that can be found in PCs, workstations and servers.
SK Hynix has set itself apart in the DRAM market by getting an early lead in HBM and establishing itself as the main supplier to the world’s leading AI chip designer, Nvidia.
However, its main competitors, U.S.-based Micron and South Korean tech giant Samsung, have been working to catch up in the space.
“With the innovation of AI technology, the memory market has shifted to a new paradigm and demand has begun to spread to all product areas,” SK Hynix Chief Financial Officer Kim Woohyun said in the earnings release.
“We will continue to strengthen our AI memory leadership by responding to customer demand through market-leading products and differentiated technological capabilities,” he added.
The HBM market is expected to keep booming over the next few years, reaching around $43 billion by 2027 and giving strong earnings leverage to memory manufacturers such as SK Hynix, MS Hwang, research director at Counterpoint Research, told CNBC.
“[F]or SK Hynix to continue generating profits, it’ll be important for the company to maintain and enhance its competitive edge,” he added.
A report from Counterpoint Research earlier this month showed that SK Hynix held a leading 38% share of the DRAM market by revenue in the second quarter of the year, increasing its share after overtaking Samsung in the first quarter.
The report added that the global HBM market grew 178% year over year in the second quarter, and SK Hynix dominated the space with a 64% share.
Celestica CEO Rob Mionis explained in a Tuesday interview with CNBC’s Jim Cramer how his company designs and manufactures infrastructure that enables artificial intelligence.
“If AI is a speeding freight train, we’re laying the tracks ahead of the freight train,” Mionis said.
He pushed back against the notion that the AI boom is a bubble, saying that the technology has gone from a “nice to have” to a “must have.”
Celestica reported earnings Monday after the close, beating estimates and raising its full-year outlook. The stock hit a 52-week high during Tuesday’s session and closed up more than 8%. Celestica has had a huge run over the past several months, and shares are up about 254% year to date.
Mionis described some of Celestica’s business strategies, including how the Canadian outfit chose to move away from commodity markets and into design and manufacturing. He told Cramer that choice “has paid off in spades” for his company.
Celestica’s focus on design and manufacturing enables the company to “consistently execute at scale,” he added.
He detailed Celestica’s data center work, saying the company makes high-speed networking and storage systems for hyperscalers, digital-native companies and other enterprise names.
Mionis praised the company’s partnership with semiconductor maker Broadcom, saying Celestica uses Broadcom’s silicon in a lot of its designs.
“What it means for us is when they launch a new piece of silicon — so the Tomahawk 6 is their 1.6 terabyte silicon — when they launch that into the marketplace, they’ll work with us to develop products, and those products end up in the major hyperscalers.”
Disclaimer: The CNBC Investing Club Charitable Trust owns shares of Broadcom.
Elon Musk’s Wikipedia rival Grokipedia got off to a “rocky start” in its public debut, but Wikipedia founder Jimmy Wales didn’t even have to look at the AI’s output to know what to expect.
“I’m not optimistic he will create anything very useful right now,” Wales said at the CNBC Technology Executive Council Summit in New York City on Tuesday.
Wales had plenty of choice words for Musk, notably in response to allegations that there is “woke bias” on Wikipedia. “He is mistaken about that,” Wales said. “His complaints about Wiki are that we focus on mainstream sources and I am completely unapologetic about that. We don’t treat random crackpots the same as The New England Journal of Medicine and that doesn’t make us woke,” he said at the CNBC event. “It’s a paradox. We are so radical we quote The New York Times.”
“I haven’t had the time to really look at Grokipedia, and it will be interesting to see, but apparently it has a lot of praise about the genius of Elon Musk in it. So I’m sure that’s completely neutral,” he added.
Wales’ digs at Grokipedia — which has its own wiki page — were less about any ongoing spat with Musk and more about his significant concerns about the efforts by all large language models to create a trusted online source of information.
“The LLMs he is using to write it are going to make massive errors,” Wales said. “We know ChatGPT and all the other LLMs are not good enough to write wiki entries.”
Musk seems equally certain of the opposite outcome: “Grokipedia will exceed Wikipedia by several orders of magnitude in breadth, depth and accuracy,” he wrote in a post on Tuesday night.
Wales gave several real-world examples of why he doesn’t have faith in LLMs to recreate what Wikipedia’s global community has built over decades at a fraction of the cost. He estimated the organization’s hard technology costs at $175 million annually, versus the tens of billions of dollars big tech companies are constantly pouring into AI efforts and, by one Wall Street estimate, a total of $550 billion in AI spending expected from the so-called hyperscalers next year.
One example Wales cited of LLM inaccuracy relates to his wife. Wales said he often asks new chatbot models to research obscure topics as a test of their abilities, and asking who his wife is, a “not famous but known” person who worked in British politics, always results in a “plausible but wrong” answer. Any time you ask an LLM to dig deep, Wales added, “it’s a mess.”
He also gave the example of a German Wiki community member who wrote a program to verify the ISBNs of cited books and was able to trace notable mistakes to one person. That person ultimately confessed they had used ChatGPT to find citations for text references, and the LLM “just very happily makes up books for you,” Wales said.
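That kind of automated audit is feasible because ISBN check digits follow public checksum rules, so a script can flag many fabricated or mistyped identifiers without consulting a catalog. The volunteer’s actual tool isn’t public, so the snippet below is only an illustrative sketch assuming standard ISBN-10 and ISBN-13 checksum math; note that a well-formed checksum still doesn’t prove the book exists.

```python
# Illustrative sketch of the kind of check a citation-auditing script
# might run (the actual tool referenced in the article is not public).

def is_valid_isbn13(isbn: str) -> bool:
    """ISBN-13: digits weighted 1, 3, 1, 3, ...; total must be divisible by 10."""
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

def is_valid_isbn10(isbn: str) -> bool:
    """ISBN-10: digits weighted 10 down to 1 ('X' = 10, final position only);
    total must be divisible by 11."""
    chars = [c for c in isbn.upper() if c.isdigit() or c == "X"]
    if len(chars) != 10 or "X" in chars[:-1]:
        return False
    total = sum((10 if c == "X" else int(c)) * (10 - i) for i, c in enumerate(chars))
    return total % 11 == 0

print(is_valid_isbn13("978-0-306-40615-7"))  # True: valid check digit
print(is_valid_isbn10("0-306-40615-2"))      # True: valid check digit
print(is_valid_isbn13("978-1-23-456789-0"))  # False: checksum fails
```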
Wales did say the battles into which he has been drawn, by Musk and by AI, do reinforce a serious message for Wikipedia. “It’s really important for us and the Wiki community to respond to criticism like that by doubling down on being neutral and being really careful about sources,” he said. “We shouldn’t be ‘wokepedia.’ That’s not who we should be or what people want from us. It would undermine trust.”
Wales thinks the public and the media often give Wikipedia too much credit. In its early days, he says, the site was never as bad as the jokes made about it. But now, he says, “We are not as good as they think we are. Of course, we are a lot better than we used to be, but there is still so much work to do.”
And he expects the challenges from technology and misinformation to get worse, as LLMs get better at creating fake websites with plausible text that can fool the public. But he says they will have a hard time fooling the Wiki community, which has spent 25 years studying and debating trusted information sources. “But it will fool a lot of people and that is a problem,” he said.
In some cases, this same new technology, which “makes stuff up that is completely useless,” may be useful to Wikipedia, he said. Wales has been doing some work on finding limited domains where AI can uncover additional information in existing sources that should be added to a wiki, a use of gen AI he described as currently being “kind of okay.”
“Maybe it helps us do our work faster,” he said. That feedback loop could be very useful for the site if it developed its own LLM that it could train, but the associated costs have led the site to hold off on any formal effort while it continues to test the technology, he added.
“We are really happy Wiki is now part of the infrastructure of the world, which is a pretty heavy burden on us. So when people say we’ve gotten biased, we need to take that seriously and work on anything related to it,” Wales said.
But he couldn’t resist putting that another way, too: “We talk about errors that ChatGPT makes. Just imagine an AI solely trained on Twitter. That would be a mad, angry AI trained on nonsense,” Wales said.