Jensen Huang, co-founder and chief executive officer of Nvidia Corp., during a news conference in Taipei, Taiwan, on Tuesday, June 4, 2024. Nvidia is still working on the certification process for Samsung Electronics Co.’s high-bandwidth memory chips, a final required step before the Korean company can begin supplying a component essential to training AI platforms. 

Annabelle Chih | Bloomberg | Getty Images

Nvidia called DeepSeek’s R1 model “an excellent AI advancement,” despite the Chinese startup’s emergence causing the chip maker’s stock price to plunge 17% on Monday.

“DeepSeek is an excellent AI advancement and a perfect example of Test Time Scaling,” an Nvidia spokesperson told CNBC on Monday. “DeepSeek’s work illustrates how new models can be created using that technique, leveraging widely-available models and compute that is fully export control compliant.”

The comments come after DeepSeek last week released R1, which is an open-source reasoning model that reportedly outperformed the best models from U.S. companies such as OpenAI. R1’s self-reported training cost was less than $6 million, which is a fraction of the billions that Silicon Valley companies are spending to build their artificial-intelligence models. 

Nvidia’s statement indicates that it sees DeepSeek’s breakthrough as creating more work for the American chip maker’s graphics processing units, or GPUs. 

“Inference requires significant numbers of NVIDIA GPUs and high-performance networking,” the spokesperson added. “We now have three scaling laws: pre-training and post-training, which continue, and new test-time scaling.”

Nvidia also said that the GPUs DeepSeek used were fully export compliant. That counters Scale AI CEO Alexandr Wang’s comments on CNBC last week that he believed DeepSeek used Nvidia GPU models that are banned in mainland China. DeepSeek says it used special versions of Nvidia’s GPUs intended for the Chinese market.

Analysts are now asking whether multibillion-dollar capital investments from companies like Microsoft, Google and Meta in Nvidia-based AI infrastructure are being wasted when the same results can be achieved more cheaply.

Earlier this month, Microsoft said it is spending $80 billion on AI infrastructure in 2025 alone, while Meta CEO Mark Zuckerberg said last week that the social media company planned to invest between $60 billion and $65 billion in capital expenditures in 2025 as part of its AI strategy.

“If model training costs prove to be significantly lower, we would expect a near-term cost benefit for advertising, travel, and other consumer app companies that use cloud AI services, while long-term hyperscaler AI-related revenues and costs would likely be lower,” wrote BofA Securities analyst Justin Post in a note on Monday.

Nvidia’s comment also reflects a new theme that Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella have discussed in recent months.

Much of the AI boom and the demand for Nvidia GPUs was driven by the “scaling law,” a concept in AI development proposed by OpenAI researchers in 2020. That concept suggested that better AI systems could be developed by greatly expanding the amount of computation and data that went into building a new model, requiring more and more chips.
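As a rough illustration of the form that 2020 finding took (the fitted constants and exponents are left symbolic here rather than quoted from the paper), the researchers reported that a model’s test loss L falls off as a power law as the parameter count N and the dataset size D grow:

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
$$

Pushing N and D higher reliably pushed loss lower, which is why building a better model came to mean buying more chips.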

Since November, Huang and Altman have been focusing on a new wrinkle to the scaling law, which Huang calls “test-time scaling.” 

This concept holds that if a fully trained AI model spends more time and uses extra computing power to “reason” when making predictions or generating text or images, it will provide better answers than it would if it ran for less time.
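One way to picture the idea, as a minimal sketch rather than a description of how OpenAI or DeepSeek actually implement it, is best-of-n sampling: spend extra inference compute by generating several candidate answers and keeping the one a scoring function rates highest. The generate and score functions below are hypothetical stand-ins, not any real model API.

```python
# Minimal sketch of one form of test-time scaling: best-of-n sampling.
# More samples per question means more inference-time compute and,
# ideally, a better final answer.
import random
from typing import Callable


def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Sample n candidate answers and return the one the scorer rates highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs on its own; a real setup would call a model.
    toy_generate = lambda p: f"candidate answer #{random.randint(1, 1000)}"
    toy_score = lambda p, a: random.random()
    print(best_of_n("What is 17 * 24?", toy_generate, toy_score, n=4))
```

Doubling n roughly doubles the GPU work per query, which is why Nvidia frames test-time scaling as a new source of demand for its chips rather than a threat to it.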

Forms of the test-time scaling law are used in some of OpenAI’s models, such as o1, as well as in DeepSeek’s breakthrough R1 model.

Alphabet becomes fourth company to reach $3 trillion market cap

Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California on May 20, 2025.

Camille Cohen | Afp | Getty Images

Alphabet has joined the $3 trillion club.

Shares of the search giant jumped more than 4% on Monday, pushing the company into territory occupied only by Nvidia, Microsoft and Apple.

The stock got a big lift in early September from an antitrust ruling in which the judge’s penalties came in lighter than shareholders had feared. The U.S. Department of Justice had wanted Google to be forced to divest its Chrome browser, after a district court ruled last year that the company held an illegal monopoly in search and related advertising.

But Judge Amit Mehta decided against the most severe consequences proposed by the DOJ, which sent shares soaring to a record. After the big rally, President Donald Trump congratulated the company and called it “a very good day.”

Alphabet shares are now up more than 30% this year, compared to the 15% gain for the Nasdaq.

The $3 trillion milestone comes roughly 20 years after Google’s IPO and a little more than 10 years after the creation of Alphabet as a holding company, with Google its prime subsidiary.

Sundar Pichai was named CEO of Alphabet in 2019, replacing co-founder Larry Page. Pichai’s latest challenge has been the surge of new competition from the rise of artificial intelligence, which the company has had to navigate while also fending off an aggressive set of regulators in the U.S. and Europe.

The rise of Perplexity and OpenAI ended up helping Google land the recent favorable antitrust ruling. The company’s hopes of becoming a major AI player largely ride with Gemini, Google’s flagship suite of AI models.

Bessent: TikTok deal ‘framework’ reached with China, Trump and Xi will finalize it Friday

Samuel Boivin | Nurphoto | Getty Images

The U.S. and China have reached a “framework” deal for social media platform TikTok, Treasury Secretary Scott Bessent said Monday.

“It’s between two private parties, but the commercial terms have been agreed upon,” he said from U.S.-China talks in Madrid.

President Donald Trump and Chinese President Xi Jinping will meet Friday to discuss the terms. Trump also said in a Truth Social post Monday that a deal was reached “on a ‘certain’ company that young people in our Country very much wanted to save.”

Bessent indicated that the framework could pivot the platform to U.S.-controlled ownership.

TikTok did not immediately respond to a request for comment.

The comments came during the latest round of trade discussions between the U.S. and China. Relations between the two countries have soured in recent months over Trump’s tariffs and other trade restrictions.

At the same time, TikTok parent company ByteDance faces a Sept. 17 deadline to divest the platform’s U.S. business or face being shut down in the country.

U.S. Trade Representative Jamieson Greer said Monday that the deadline may need to be pushed back to get the deal signed, but there won’t be ongoing extensions.

Congress passed a law last year prohibiting app store operators like Apple and Google from distributing TikTok in the U.S. due to its “foreign adversary-controlled application” status.

But Trump postponed the shutdown in January, signing an executive order that gave ByteDance 75 more days to make a deal. Further extensions came by way of executive orders in April and June.

Commerce Secretary Howard Lutnick said in July that TikTok would shutter for Americans if China doesn’t give the U.S. more autonomy over the popular short-form video app.

As for who controls the platform, Trump told Fox News in June that he had a group of “very wealthy people” ready to buy the app and could reveal their identities in two weeks. The reveal never came.

He has previously said he’d be open to Oracle Chairman Larry Ellison or Tesla CEO Elon Musk buying TikTok in the U.S. Artificial intelligence startup Perplexity has submitted a bid for an acquisition, as has businessman Frank McCourt’s Project Liberty internet advocacy group, CNBC reported in January.

Trump told CNBC in an interview last year that he believed the platform was a national security threat, although the White House started a TikTok account in August.

Why is Sam Altman losing sleep? OpenAI CEO addresses controversies in sweeping interview

Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in the Hart Building on Thursday, May 8, 2025.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the popular ChatGPT AI model.  

“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview. 

“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.” 

Rather, he said he loses the most sleep over the “very small decisions” on model behavior, which can ultimately have big repercussions.

These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn’t answer. Here’s an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.

How does ChatGPT address suicide?

According to Altman, the most difficult issue the company is grappling with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son’s suicide.

The CEO said that of the thousands of people who die by suicide each week, many may have been talking to ChatGPT in the lead-up.

“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.” 

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”

Soon after, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” and said it would keep improving its technology to protect people who are at their most vulnerable. 

How are ChatGPT’s ethics determined?

Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards. 

While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won’t answer. 

“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.” 

When pressed on how certain model specifications are decided, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”

One example of such a specification he gave was that ChatGPT will avoid answering users’ questions about how to make biological weapons.

“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added the company “won’t get everything right, and also needs the input of the world” to help make these decisions.

How private is ChatGPT?

Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”

In response, Altman said one piece of policy he has been pushing for in Washington is “AI privilege,” which refers to the idea that anything a user says to a chatbot should be completely confidential. 

“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.” 

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.

“I think I feel optimistic that we can get the government to understand the importance of this,” he said. 

Will ChatGPT be used in military operations?

Just how powerful is OpenAI?

Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Altman himself, could amass more power than any other person, going so far as to call ChatGPT a “religion.”

In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in “a huge up leveling” of all people. 

“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.”

However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.
