On Monday, Chinese artificial intelligence startup DeepSeek dethroned rival OpenAI's ChatGPT as the most-downloaded free app in the U.S. on Apple's App Store, with its AI Assistant taking the coveted top spot. Global tech stocks sold off and were on pace to wipe out billions in market cap.
Later on Monday, DeepSeek said it would temporarily limit user registrations "due to large-scale malicious attacks" on its services, though existing users would be able to log in as usual.
Tech leaders, analysts, investors and developers say the hype, and the ensuing fear of falling behind in the fast-moving AI cycle, may be warranted, especially amid the generative AI arms race, in which tech giants and startups alike are racing to keep pace in a market predicted to top $1 trillion in revenue within a decade.
What is DeepSeek?
DeepSeek was founded in 2023 by Liang Wenfeng, co-founder of High-Flyer, a quantitative hedge fund focused on AI. The AI startup reportedly grew out of the hedge fund's AI research unit in April 2023 to focus on large language models and on reaching artificial general intelligence, or AGI, a form of AI that equals or surpasses human intellect on a wide range of tasks, which OpenAI and its rivals say they're fast pursuing. DeepSeek is still wholly owned and funded by High-Flyer, according to analysts at Jefferies.
The buzz around DeepSeek began picking up steam earlier this month, when the startup released R1, its reasoning model that rivals OpenAI’s o1. It’s open-source, meaning that any AI developer can use it, and has rocketed to the top of app stores and industry leaderboards, with users praising its performance and reasoning capabilities.
Like other Chinese chatbots, it has its limitations when asked about certain topics: When asked about some of Chinese leader Xi Jinping’s policies, for instance, DeepSeek reportedly steers the user away from similar lines of questioning.
Another key part of the discussion: DeepSeek’s R1 was built despite the U.S. curbing chip exports to China three times in three years. Estimates differ on exactly how much DeepSeek’s R1 costs, or how many GPUs went into it. Jefferies analysts estimated that a recent version had a “training cost of only US$5.6m (assuming US$2/H800 hour rental cost). That is less than 10% of the cost of Meta‘s Llama.” But regardless of the specific numbers, reports agree that the model was developed at a fraction of the cost of rival models by OpenAI, Anthropic, Google and others.
As a result, the AI sector is awash with questions, including whether the industry's increasingly astronomical funding rounds and billion-dollar valuations are necessary, and whether a bubble is about to burst.
Shares of Nvidia fell nearly 16% on Monday, with chip equipment maker ASML down nearly 7%. The Nasdaq dropped more than 3%. Four tech giants, Meta, Microsoft, Apple and ASML, are all set to report earnings this week.
Analysts at Raymond James detailed some of the questions plaguing the AI industry this month, writing, “What are the investment implications? What does it say about open sourced vs. proprietary models? Is throwing money at GPUs really a panacea? Are U.S. export restrictions working? What are the broader implications of [DeepSeek]? Well, they could be dire, or a non-event, but rest assured, the industry is abuzz with disbelief and speculation.”
Bernstein analysts wrote in a note Monday that “according to the many (occasionally hysterical) hot takes we saw [over the weekend,] the implications range anywhere from ‘That’s really interesting’ to ‘This is the death-knell of the AI infrastructure complex as we know it.'”
How U.S. companies are responding
Some American tech CEOs are scrambling to respond before clients switch to potentially cheaper offerings from DeepSeek, with Meta reportedly starting four DeepSeek-related "war rooms" within its generative AI department.
Microsoft CEO Satya Nadella wrote on X that the DeepSeek phenomenon was just an example of the Jevons paradox, writing, “As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.” OpenAI CEO Sam Altman tweeted a quote he attributed to Napoleon, writing, “A revolution can be neither made nor stopped. The only thing that can be done is for one of several of its children to give it a direction by dint of victories.”
Yann LeCun, Meta’s chief AI scientist, wrote on LinkedIn that DeepSeek’s success is indicative of changing tides in the AI sector to favor open-source technology.
LeCun wrote that DeepSeek has profited from some of Meta’s own technology, i.e., its Llama models, and that the startup “came up with new ideas and built them on top of other people’s work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source.”
Alexandr Wang, CEO of Scale AI, told CNBC last week that DeepSeek's previous AI model was "earth-shattering" and that its R1 release is even more powerful.
“What we’ve found is that DeepSeek … is the top performing, or roughly on par with the best American models,” Wang said, adding that the AI race between the U.S. and China is an “AI war.” Wang’s company provides training data to key AI players including OpenAI, Google and Meta.
Earlier this week, President Donald Trump announced a joint venture with OpenAI, Oracle and SoftBank to invest billions of dollars in U.S. AI infrastructure. The project, Stargate, was unveiled at the White House by Trump, SoftBank CEO Masayoshi Son, Oracle co-founder Larry Ellison and OpenAI CEO Sam Altman. Key initial technology partners will include Microsoft, Nvidia and Oracle, as well as semiconductor company Arm. They said they would invest $100 billion to start and up to $500 billion over the next four years.
AI evolving
News of DeepSeek’s prowess also comes amid the growing hype around AI agents — models that go beyond chatbots to complete multistep complex tasks for a user — which tech giants and startups alike are chasing. Meta, Google, Amazon, Microsoft, OpenAI and Anthropic have all expressed their goal of building agentic AI.
Anthropic, the Amazon-backed AI startup founded by ex-OpenAI research executives, ramped up its technology development throughout the past year, and in October, the startup said that its AI agents were able to use computers like humans to complete complex tasks. Anthropic’s Computer Use capability allows its technology to interpret what’s on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing, the startup said.
The tool can “use computers in basically the same way that we do,” Jared Kaplan, Anthropic’s chief science officer, told CNBC in an interview at the time. He said it can do tasks with “tens or even hundreds of steps.”
OpenAI released a similar tool last week, introducing a feature called Operator that will automate tasks such as planning vacations, filling out forms, making restaurant reservations and ordering groceries.
The Microsoft-backed startup describes it as "an agent that can go to the web to perform tasks for you," adding that it is trained to interact with "the buttons, menus, and text fields that people use daily" on the web. It can also ask follow-up questions, such as requesting login information for other websites, to further personalize the tasks it completes. Users can take control of the screen at any time.
Shares of Google parent Alphabet jumped more than 4% on Monday, pushing the company's market cap into territory occupied only by Nvidia, Microsoft and Apple.
The stock got a big lift in early September from an antitrust ruling by a judge, whose penalties came in lighter than shareholders feared. The U.S. Department of Justice wanted Google to be forced to divest its Chrome browser, and last year a district court ruled that the company held an illegal monopoly in search and related advertising.
But Judge Amit Mehta decided against the most severe consequences proposed by the DOJ, which sent shares soaring to a record. After the big rally, President Donald Trump congratulated the company and called it “a very good day.”
Alphabet shares are now up more than 30% this year, compared to the 15% gain for the Nasdaq.
The $3 trillion milestone comes roughly 20 years after Google’s IPO and a little more than 10 years after the creation of Alphabet as a holding company, with Google its prime subsidiary.
CEO Sundar Pichai was named CEO of Alphabet in 2019, replacing co-founder Larry Page. Pichai’s latest challenge has been the surge of new competition due to the rise of artificial intelligence, which the company has had to manage through while also fending off an aggressive set of regulators in the U.S. and Europe.
The rise of Perplexity and OpenAI ended up helping Google land the recent favorable antitrust ruling. The company's hopes of becoming a major AI player rest largely on Gemini, Google's flagship suite of AI models.
The U.S. and China have reached a ‘framework’ deal for social media platform TikTok, Treasury Secretary Scott Bessent said Monday.
“It’s between two private parties, but the commercial terms have been agreed upon,” he said from U.S.-China talks in Madrid.
President Donald Trump and Chinese President Xi Jinping will meet Friday to discuss the terms. Trump also said in a Truth Social post Monday that a deal was reached "on a 'certain' company that young people in our Country very much wanted to save."
Bessent indicated that the framework could pivot the platform to U.S.-controlled ownership.
TikTok did not immediately respond to a request for comment.
The comments came during the latest round of trade discussions between the U.S. and China. Relations between the two countries have soured in recent months amid Trump's tariffs and other trade restrictions.
At the same time, TikTok parent company ByteDance faces a Sept. 17 deadline to divest the platform’s U.S. business or face being shut down in the country.
U.S. Trade Representative Jamieson Greer said Monday that the deadline may need to be pushed back to get the deal signed, but there won’t be ongoing extensions.
Congress passed a law last year prohibiting app store operators like Apple and Google from distributing TikTok in the U.S. due to its “foreign adversary-controlled application” status.
But Trump postponed the shutdown in January, signing an executive order that gave ByteDance 75 more days to make a deal. Further extensions came by way of executive orders in April and June.
Commerce Secretary Howard Lutnick said in July that TikTok would shutter for Americans if China doesn't give the U.S. more autonomy over the popular short-form video app.
As for who controls the platform, Trump told Fox News in June that he had a group of “very wealthy people” ready to buy the app and could reveal their identities in two weeks. The reveal never came.
He has previously said he’d be open to Oracle Chairman Larry Ellison or Tesla CEO Elon Musk buying TikTok in the U.S. Artificial intelligence startup Perplexity has submitted a bid for an acquisition, as has businessman Frank McCourt’s Project Liberty internet advocacy group, CNBC reported in January.
Trump told CNBC in an interview last year that he believed the platform was a national security threat, although the White House started a TikTok account in August.
Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in Hart building on Thursday, May 8, 2025.
In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the popular ChatGPT AI model.
“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.
“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.”
Rather, he said he loses the most sleep over the “very small decisions” on model behavior, which can ultimately have big repercussions.
These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn’t answer. Here’s an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.
The CEO said that of the thousands of people who die by suicide each week, many may have been talking to ChatGPT in the lead-up.
“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.”
Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”
Soon after, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” and said it would keep improving its technology to protect people who are at their most vulnerable.
How are ChatGPT’s ethics determined?
Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards.
While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won’t answer.
“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.”
When pressed on how certain model specifications are decided, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”
An example he gave of a model specification made was that ChatGPT will avoid answering questions on how to make biological weapons if prompted by users.
“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added the company “won’t get everything right, and also needs the input of the world” to help make these decisions.
How private is ChatGPT?
Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”
In response, Altman said one piece of policy he has been pushing for in Washington is “AI privilege,” which refers to the idea that anything a user says to a chatbot should be completely confidential.
“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.”
According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.
“I think I feel optimistic that we can get the government to understand the importance of this,” he said.
Will ChatGPT be used in military operations?
Asked by Carlson if ChatGPT would be used by the military to harm humans, Altman didn’t provide a direct answer.
“I don’t know the way that people in the military use ChatGPT today… but I suspect there’s a lot of people in the military talking to ChatGPT for advice.”
Later, he added that he wasn’t sure “exactly how to feel about that.”
OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, support and product roadmap information.
Just how powerful is OpenAI?
Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a "religion."
In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in “a huge up leveling” of all people.
“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.”
However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.