Nvidia CEO Jensen Huang arrives to attend the opening ceremony of Siliconware Precision Industries Co. (SPIL)’s Tan Ke Plant site in Taichung, Taiwan Jan. 16, 2025. 

Ann Wang | Reuters

Nvidia announced new chips for building and deploying artificial intelligence models at its annual GTC conference on Tuesday. 

CEO Jensen Huang revealed Blackwell Ultra, a family of chips shipping in the second half of this year, as well as Vera Rubin, the company’s next-generation graphics processing unit, or GPU, that is expected to ship in 2026.

Nvidia’s sales are up more than sixfold since its business was transformed by the release of OpenAI’s ChatGPT in late 2022. That’s because its “big GPUs” hold most of the market for developing advanced AI, a process called training.

Software developers and investors are closely watching the company’s new chips to see if they offer enough additional performance and efficiency to convince the company’s biggest end customers — cloud companies including Microsoft, Google and Amazon — to continue spending billions of dollars to build data centers based around Nvidia chips.

“This last year is where almost the entire world got involved. The computational requirement, the scaling law of AI, is more resilient, and in fact, is hyper-accelerated,” Huang said.

Tuesday’s announcements are also a test of Nvidia’s new annual release cadence. The company is striving to announce a new chip family every year. Before the AI boom, Nvidia released new chip architectures every other year.

The GTC conference in San Jose, California, is also a show of strength for Nvidia. 

The event, Nvidia’s second in-person conference since the pandemic, is expected to have 25,000 attendees and hundreds of companies discussing the ways they use the company’s hardware for AI. That includes Waymo, Microsoft and Ford, among others. General Motors also announced that it will use Nvidia’s service for its next-generation vehicles.

The chip architecture after Rubin will be named after physicist Richard Feynman, Nvidia said on Tuesday, continuing its tradition of naming chip families after scientists. Nvidia’s Feynman chips are expected to be available in 2028, according to a slide displayed by Huang.

Nvidia will also showcase its other products and services at the event. 

For example, Nvidia announced new laptops and desktops using its chips, including two AI-focused PCs called DGX Spark and DGX Station that will be able to run large AI models such as Llama or DeepSeek. The company also announced updates to its networking parts for tying hundreds or thousands of GPUs together so they work as one, as well as a software package called Dynamo that helps users get the most out of their chips.

Jensen Huang, co-founder and chief executive officer of Nvidia Corp., speaks during the Nvidia GPU Technology Conference (GTC) in San Jose, California, US, on Tuesday, March 18, 2025. 

David Paul Morris | Bloomberg | Getty Images

Vera Rubin

Nvidia expects to start shipping systems built on its next-generation GPU family in the second half of 2026.

The system has two main components: a CPU, called Vera, and a new GPU design, called Rubin. It’s named after astronomer Vera Rubin.

Vera is Nvidia’s first custom CPU design, the company said, and it’s based on a core design it has named Olympus.

Previously, when it needed CPUs, Nvidia used an off-the-shelf design from Arm. Companies that have developed custom Arm core designs, such as Qualcomm and Apple, say they can be more tailored and unlock better performance.

The custom Vera design will be twice as fast as the CPU used in last year’s Grace Blackwell chips, the company said. 

When paired with Vera, Rubin can manage 50 petaflops while doing inference, more than double the 20 petaflops for the company’s current Blackwell chips. Rubin can also support as much as 288 gigabytes of fast memory, which is one of the core specs that AI developers watch.

Nvidia is also making a change to what it calls a GPU. Rubin is actually two GPUs, Nvidia said. 

The Blackwell GPU, which is currently on the market, is actually two separate chips assembled to work as one.

Starting with Rubin, Nvidia will say that when it combines two or more dies to make a single chip, it will refer to them as separate GPUs. In the second half of 2027, Nvidia plans to release a “Rubin Next” chip that combines four dies to make a single chip, doubling the speed of Rubin, and it will refer to that as four GPUs.

Nvidia said that will come in a rack called Vera Rubin NVL144. Previous versions of Nvidia’s rack were called NVL72.
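The new counting convention explains the jump in the rack name. A rough sketch of the arithmetic, assuming a Vera Rubin rack holds 72 two-die packages just as a Blackwell NVL72 rack does (a detail not spelled out in the announcement):

```python
# Sketch of the rack-naming arithmetic implied by the article.
# Assumption (hypothetical): 72 packages per rack, matching the Blackwell NVL72.

PACKAGES_PER_RACK = 72

def rack_gpu_count(dies_per_package: int, count_dies_as_gpus: bool) -> int:
    """Return the number advertised in the rack name (e.g. NVL72, NVL144)."""
    if count_dies_as_gpus:
        # New convention starting with Rubin: every die counts as a GPU.
        return PACKAGES_PER_RACK * dies_per_package
    # Old convention: a multi-die Blackwell package counts as one GPU.
    return PACKAGES_PER_RACK

blackwell = rack_gpu_count(dies_per_package=2, count_dies_as_gpus=False)
rubin = rack_gpu_count(dies_per_package=2, count_dies_as_gpus=True)
print(f"NVL{blackwell} -> NVL{rubin}")  # NVL72 -> NVL144
```

In other words, under the old convention the Rubin rack would also have been an “NVL72”; the bookkeeping changed, not necessarily the number of physical packages.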


Blackwell Ultra

Nvidia also announced new versions of its Blackwell family of chips that it calls Blackwell Ultra.

That chip will be able to produce more tokens per second, which means that the chip can generate more content in the same amount of time as its predecessor, the company said in a briefing.

Nvidia says that means cloud providers can use Blackwell Ultra to offer a premium AI service for time-sensitive applications, allowing them to make as much as 50 times the revenue they did from the Hopper generation, which shipped in 2023.

Blackwell Ultra will come in a version pairing two GPUs with an Nvidia Arm CPU, called GB300, and a GPU-only version, called B300. It will also come in versions with eight GPUs in a single server blade, and in a rack configuration with 72 Blackwell chips.

The top four cloud companies have deployed three times the number of Blackwell chips as Hopper chips, Nvidia said.


Hyperscaler AI spending could slow down if Oracle shows ‘discipline’



CNBC’s Jim Cramer on Tuesday proposed that action from Oracle could slow down other hyperscalers’ enormous artificial intelligence spending, saying the OpenAI partner should show “discipline.”

“Oracle already has a huge amount of debt. Their balance sheet’s not that good. At some point, they’ll heed the warning of the bond market and slow things down,” he said. “These data centers cost a fortune and even the best builders stumble. … Oracle can’t risk blowing up its balance sheet for Sam Altman. That’s when and how we’re going to get out of this morass.”

Cramer named five tech behemoths engaged in massive AI spending: Amazon, Microsoft, Google, Meta and OpenAI in partnership with Oracle. These names are trying to outspend each other, building data centers wherever they can, Cramer said. He added that they’re also trying to keep rivals from encroaching on their core businesses.

This “reckless, imprudent data center spending” has sent these stocks’ valuations plummeting, Cramer said. He suggested that OpenAI “is funded by venture capitalists and the company seems willing to spend itself to death.” Other companies will try to keep up as long as the ChatGPT maker keeps spending, Cramer continued. OpenAI has committed to spending over $300 billion over five years on Oracle’s technology, and its many commitments to other companies total close to $1.4 trillion.

But Oracle’s $18 billion bond issuance drew scrutiny across Wall Street, Cramer said, as many investors aggressively bought credit default swaps — insurance paid out if a company defaults on its obligations. If Oracle pumps the brakes on spending, competitors could follow suit and see their stocks climb, Cramer said.

“This way Oracle stays alive, and OpenAI is forced to choose which businesses it truly wants to target,” he said. “Because he who defends everything defends nothing.”

Oracle and OpenAI did not immediately respond to requests for comment.


Tesla stock hits record as Wall Street rallies around robotaxi hype despite slow EV sales


Tesla CEO Elon Musk attends the Saudi-U.S. Investment Forum, in Riyadh, Saudi Arabia, May 13, 2025.

Hamad I Mohammed | Reuters

What started off as a particularly rough year for Tesla investors is turning into quite the celebration.

Following a 36% plunge in the first quarter, the stock’s worst period since 2022, Tesla shares have rallied all the way back, reaching an all-time high of $489.48. That tops its prior intraday record of $488.54 reached almost exactly a year ago.

The stock got a spark this week after CEO Elon Musk, the world’s richest person, said Tesla has been testing driverless vehicles in Austin, Texas, with no occupants on board, almost six months after launching a pilot program with safety drivers.

With the rally, Tesla’s market cap climbed to $1.63 trillion, making it the seventh-most valuable publicly traded company, behind Nvidia, Apple, Alphabet, Microsoft, Amazon and Meta, and slightly ahead of Broadcom. Musk’s net worth now sits at close to $683 billion, according to Forbes, more than $400 billion ahead of Google co-founder Larry Page, who is second on the list.

Bullish investors view the news as a sign that the company will finally make good on its longtime promise to turn its existing electric vehicles into robotaxis with a software update.

Tesla’s automated driving systems being tested in Austin are not yet widely available, and myriad safety-related questions remain.

It’s been a rollercoaster year for Tesla, which entered the year in a seemingly favorable position due to Musk’s role in President Donald Trump’s White House, running the Department of Government Efficiency, or DOGE, an effort to dramatically downsize the federal government and slash federal regulations.

However, Musk’s work with Trump, endorsements of far-right political figures around the world, and incendiary political rhetoric sparked a consumer backlash that continues to weigh on Tesla’s brand reputation and sales.

For the first quarter, Tesla reported a 13% decrease in deliveries and a 20% plunge in automotive revenue. In the second quarter, the stock rallied but the sales decline continued, with auto revenue dropping 16%.

The second half of the year has been much stronger. In October, Tesla reported a 12% increase in third-quarter revenue as buyers in the U.S. rushed to snap up EVs and take advantage of a federal tax credit that expired at the end of September. The stock jumped 40% in the period.

Business challenges remain due to the loss of the tax credit, the ongoing backlash against Musk, and strong competition from lower-cost or more appealing EVs made by companies including BYD and Xiaomi in China and Volkswagen in Europe.

While Tesla released more affordable variants of its popular Model Y SUV and Model 3 sedans in October, those haven’t helped its U.S. or European sales so far. In the U.S., the new stripped-down options appear to be cannibalizing sales of Tesla’s higher-priced models. According to Cox Automotive, Tesla’s U.S. sales dropped in November to a four-year low.

Despite a difficult environment for EV makers in the U.S., Mizuho raised its price target on Tesla this week to $530 from $475 and kept its buy recommendation on the stock. Analysts at the firm wrote that reported improvements in Tesla’s FSD, or Full Self-Driving (Supervised) technology, “could support an accelerated expansion” of its “robotaxi fleet in Austin, San Francisco, and potentially earlier elimination of the chaperone.” 

Tesla operates a Robotaxi-branded ride-hailing service in Texas and California, but for now the vehicles include drivers or human safety supervisors on board.


What Harvard researchers learned about use of AI in white-collar work at top companies


The Baker Library of the Harvard Business School on the Harvard University campus in Boston, Massachusetts, US, on Tuesday, May 27, 2025. Recent research conducted by the Digital Data Design Institute at Harvard Business School is investigating where AI is most effective in increasing productivity and performance — and where humans still have the upper hand.

Bloomberg | Bloomberg | Getty Images

Workplace AI adoption is at an all-time high, according to Anthropic data, but just because organizations use AI doesn’t mean it’s effective.

“Nobody knows those answers, even though a lot of people are saying they do,” said Jen Stave, chief operator at the Digital Data Design Institute (D^3) at Harvard Business School. While much of the business world tries to figure out where AI can be best deployed, the team at D^3 is researching where the technology is most effective in increasing productivity and performance — and where humans still have the upper hand.

Workplace collaboration is a long-held standard for innovation and productivity, but AI is changing what that looks like. AI-equipped individuals perform at comparable levels to teams without access to AI, D^3’s recent research in partnership with Procter & Gamble finds. “AI is capable of reproducing certain benefits typically gained through human collaboration, potentially revolutionizing how organizations structure their teams and allocate resources,” according to the research.

Think AI-enabled teams, not just AI-equipped individuals.

While AI-equipped individuals show significant improvement in factors like speed and performance, strategically curated teams with AI have their own advantages. When factoring in the quality of outcomes, the best, most innovative solutions come from AI-enabled teams. This research relies on AI tools not optimized for collaboration, but AI systems purpose-built for collaboration could further enhance these benefits. In other words, simply replacing humans with AI may not be the fix businesses hope for.

“Companies that are actually thinking through the changes in roles and where we need to not just lean into it but protect human jobs and maybe even add some in that space if that’s our competitive advantage, that, to me, is a signal of a super mature mindset around AI,” Stave said.

The D^3 experiment at P&G also shows that AI integration significantly narrows the gaps between an organization’s pockets of domain expertise. For example, having a shared knowledge base at hand could make any one team’s outputs broadly useful beyond individual functions like human resources, engineering, and research and development.


Lower-level workers benefit more, but it is a double-edged sword.

Another experiment D^3 conducted with Boston Consulting Group showed AI leads to more homogenized results. “Humans have more diverse ideas, and people who use AI tend to produce more similar ideas,” Stave said, recognizing that companies with goals of standing out in the market should lean into human-led creativity.

Performers on the lower half of the skill spectrum exhibit the biggest performance gains (43%) when equipped with AI compared to performers on the top half of the skill spectrum (who get a 17% performance surge). While both outcomes are substantial, it’s the entry-level workers who get the biggest perks.

But for less-skilled workers, it’s a double-edged sword. For instance, if AI can do junior work better, senior workers might stop delegating tasks to their junior counterparts, creating training deficits that hurt future performance. With the company’s future in mind, businesses will want to carefully consider what they do and don’t delegate.

Human managers are not prepared to oversee AI agents. They need to learn.

While Stave says humans serving as managers to a suite of AI agents is “absolutely going to happen,” the scaffolding to do so effectively and with minimal harm simply isn’t there yet. Stave herself has had this experience, and it contrasted with all of her managerial and leadership education. “You learn how to manage according to empathy and understanding, how to make the most of human potential,” she said. “I had all these AI agents that I was personally trying to build and manage. It was a fundamentally different experience.”

Moreover, while Grammarly CEO Shishir Mehrotra said entry-level workers could be the new managers (with AI agents — not people — in their charge), the junior workforce has not actually proven to be enterprise AI-native or managerially equipped. “We want to see AI giving humans more opportunity to flourish. The challenge I have is with assuming that the junior employees are going to step in and know how to do that right away,” Stave said.

She added that the companies truly getting value from their AI deployments are the ones undertaking process redesign. Instead of relying on AI notetaking to save time, lean into where AI helps and where humans are the winners. “It’s very easy to buy a tool and implement it,” she said. “It’s really hard to actually do org redesign, because that’s when you get into all these internal empires and power struggles.”

But even so, she says, the effort is worth it.
