China is focusing on large language models (LLMs) in the artificial intelligence space. 

Blackdovfx | iStock | Getty Images

China’s attempts to dominate the world of artificial intelligence could be paying off, with industry insiders and technology analysts telling CNBC that Chinese AI models are already hugely popular and are keeping pace with — and even surpassing — those from the U.S. in terms of performance.

AI has become the latest battleground between the U.S. and China, with both sides considering it a strategic technology. Washington continues to restrict China’s access to leading-edge chips designed to help power artificial intelligence amid fears that the technology could threaten U.S. national security.

It’s led China to pursue its own approach to boosting the appeal and performance of its AI models, including relying on open-sourcing technology and developing its own super-fast software and chips.

China is creating popular LLMs

On Hugging Face, a repository of LLMs, Chinese LLMs are the most downloaded, according to Tiezhen Wang, a machine learning engineer at the company. Qwen, a family of AI models created by Chinese e-commerce giant Alibaba, is the most popular on Hugging Face, he said.

“Qwen is rapidly gaining popularity due to its outstanding performance on competitive benchmarks,” Wang told CNBC by email.

He added that Qwen has a “highly favorable licensing model” which means it can be used by companies without the need for “extensive legal reviews.”

Qwen comes in various sizes, measured in parameters, as they’re known in the world of LLMs. Models with more parameters are generally more capable but have higher computational costs, while smaller ones are cheaper to run.

“Regardless of the size you choose, Qwen is likely to be one of the best-performing models available right now,” Wang added.
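For developers, trying one of these open models requires little more than a few lines of code. Below is a minimal sketch of loading a small Qwen variant from Hugging Face with the transformers library; the specific model ID (Qwen/Qwen2.5-0.5B-Instruct), prompt and generation settings are illustrative assumptions, not details drawn from the companies mentioned here.

    # Minimal sketch: running a small open-weight Qwen model locally.
    # Assumption: the model ID and settings below are chosen for illustration only.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # a small, cheap-to-run variant
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Build a chat-style prompt and generate a short reply.
    messages = [{"role": "user", "content": "In one sentence, why are smaller LLMs cheaper to run?"}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Larger variants follow the same pattern; the main difference is the memory and compute needed to load and serve them.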

DeepSeek, a start-up, also made waves recently with a model called DeepSeek-R1. DeepSeek said last month that its R1 model competes with OpenAI’s o1 — a model designed for reasoning or solving more complex tasks.

These companies claim that their models can compete with other open-source offerings like Meta’s Llama, as well as closed LLMs such as those from OpenAI, across various functions.

“In the last year, we’ve seen the rise of open source Chinese contributions to AI with really strong performance, low cost to serve and high throughput,” Grace Isford, a partner at Lux Capital, told CNBC by email.

China pushes open source to go global

Open sourcing a technology serves a number of purposes, including driving innovation as more developers have access to it, as well as building a community around a product.

It is not only Chinese firms that have launched open-source LLMs. Facebook parent Meta, as well as European start-up Mistral, also have open-source versions of AI models.

But with the technology industry caught in the crosshairs of the geopolitical battle between Washington and Beijing, open-source LLMs give Chinese firms another advantage: enabling their models to be used globally.

“Chinese companies would like to see their models used outside of China, so this is definitively a way for companies to become global players in the AI space,” Paul Triolo, a partner at global advisory firm DGA Group, told CNBC by email.

While the focus is on AI models right now, there is also debate over what applications will be built on top of them — and who will dominate this global internet landscape going forward.

“If you assume these frontier base AI models are table stakes, it’s about what these models are used for, like accelerating frontier science and engineering technology,” Lux Capital’s Isford said.

Today’s AI models have been compared to operating systems, such as Microsoft’s Windows, Google’s Android and Apple’s iOS, with the potential to dominate a market in the way those companies do on mobile and PCs.

If that comparison holds, the stakes for building a dominant LLM are even higher.

“They [Chinese companies] perceive LLMs as the center of future tech ecosystems,” Xin Sun, senior lecturer in Chinese and East Asian business at King’s College London, told CNBC by email.

“Their future business models will rely on developers joining their ecosystems, developing new applications based on the LLMs, and attracting users and data from which profits can be generated subsequently through various means, including but far beyond directing users to use their cloud services,” Sun added.

Chip restrictions cast doubt over China’s AI future

AI models are trained on vast amounts of data, requiring huge amounts of computing power. Currently, Nvidia is the leading designer of the chips required for this, known as graphics processing units (GPUs).

Most of the leading AI companies train their systems on Nvidia’s highest-performance chips, but that’s not the case in China.

Over the past year or so, the U.S. has ramped up export restrictions on advanced semiconductors and chipmaking equipment to China. As a result, Nvidia’s leading-edge chips cannot be exported to the country, and the company has had to create sanction-compliant semiconductors for the Chinese market.

Despite these curbs, Chinese firms have still managed to launch advanced AI models.

“Major Chinese technology platforms currently have sufficient access to computing power to continue to improve models. This is because they have stockpiled large numbers of Nvidia GPUs and are also leveraging domestic GPUs from Huawei and other firms,” DGA Group’s Triolo said.

Indeed, Chinese companies have been boosting efforts to create viable alternatives to Nvidia. Huawei has been one of the leading players in pursuit of this goal in China, while firms like Baidu and Alibaba have also been investing in semiconductor design.

“However, the gap in terms of advanced hardware compute will become greater over time, particularly next year as Nvidia rolls out its Blackwell-based systems that are restricted for export to China,” Triolo said.

Lux Capital’s Isford flagged that China has been “systematically investing and growing their whole domestic AI infrastructure stack outside of Nvidia with high-performance AI chips from companies like Baidu.”

“Whether or not Nvidia chips are banned in China will not prevent China from investing and building their own infrastructure to build and train AI models,” she added.

Amazon extends Prime Day to four days, starting July 8

An Amazon worker moves boxes on Amazon Prime Day in the East Village of New York City, July 11, 2023.

Spencer Platt | Getty Images

Amazon is extending its Prime Day discount bonanza, announcing that the annual sale will run four days this year.

The 96-hour event will start at 12:01 a.m. PT on July 8, and continue through July 11, Amazon said in a release.

For the first time, the company will roll out themed “deal drops” that change daily and are available “while supplies last.” Amazon has in recent years toyed with adding more limited-run and invite-only deals during Prime Day events to create a feeling of urgency or scarcity.

Amazon launched Prime Day in 2015 as a way to secure new members for its $139-a-year loyalty program, and to promote its own products and services while providing a sales boost in the middle of the year. In 2019, the company made Prime Day a 48-hour event, and it’s since added a second Prime Day-like event in the fall.

Prime Day is also a significant revenue driver for other retailers, which often host competing discount events.

WATCH: How Amazon is using AI to revolutionize robotics

SK Hynix shares extend gains to over 2-decade highs as parent group reportedly plans AI data center

Illustration of the SK Hynix company logo seen displayed on a smartphone screen.

Sopa Images | Lightrocket | Getty Images

Shares in South Korea’s SK Hynix extended gains to hit a more than 2-decade high on Tuesday, following reports over the weekend that SK Group plans to build the country’s largest AI data center.

SK Hynix shares, which have surged almost 50% so far this year on the back of an AI boom, were up nearly 3%, following gains on Monday. 

The company’s parent, SK Group, plans to build the AI data center in partnership with Amazon Web Services in Ulsan, according to domestic media. SK Telecom and SK Broadband are reportedly leading the initiative, with support from other affiliates, including SK Hynix. 

SK Hynix is a leading supplier of dynamic random access memory, or DRAM, a type of semiconductor memory found in PCs, workstations and servers that is used to store data and program code.

The company’s DRAM rival, Samsung, was also trading up 4% on Tuesday. However, its growth has fallen behind that of SK Hynix.

On Friday, Samsung Electronics’ market cap reportedly slid to a 9-year low of 345.1 trillion won ($252 billion) as the chipmaker struggles to capitalize on AI-led demand. 

SK Hynix, on the other hand, has become a leader in high bandwidth memory — a type of DRAM used in artificial intelligence servers — supplying to clients such as AI behemoth Nvidia. 

A report from Counterpoint Research in April said that SK Hynix had captured 70% of the HBM market by revenue share in the first quarter.

This HBM strength helped it overtake Samsung in the overall DRAM market for the first time ever, with a 36% global market share as compared to Samsung’s 34%. 

OpenAI wins $200 million U.S. defense contract

OpenAI CEO Sam Altman speaks during the Snowflake Summit in San Francisco on June 2, 2025.

Justin Sullivan | Getty Images News | Getty Images

OpenAI has been awarded a $200 million contract to provide the U.S. Defense Department with artificial intelligence tools.

The department announced the one-year contract on Monday, months after OpenAI said it would collaborate with defense technology startup Anduril to deploy advanced AI systems for “national security missions.”

“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said. It’s the first contract with OpenAI listed on the Department of Defense’s website.

Anduril received a $100 million defense contract in December. Weeks earlier, OpenAI rival Anthropic said it would work with Palantir and Amazon to supply its AI models to U.S. defense and intelligence agencies.

Sam Altman, OpenAI’s co-founder and CEO, said in a discussion with OpenAI board member and former National Security Agency leader Paul Nakasone at a Vanderbilt University event in April that “we have to and are proud to and really want to engage in national security areas.”

OpenAI did not immediately respond to a request for comment.

The Defense Department specified that the contract is with OpenAI Public Sector LLC, and that the work will mostly occur in the National Capital Region, which encompasses Washington, D.C., and several nearby counties in Maryland and Virginia.

Meanwhile, OpenAI is working to build additional computing power in the U.S. In January, Altman appeared alongside President Donald Trump at the White House to announce the $500 billion Stargate project to build AI infrastructure in the U.S.

The new contract will represent a small portion of revenue at OpenAI, which is generating over $10 billion in annualized sales. In March, the company announced a $40 billion financing round at a $300 billion valuation.

In April, Microsoft, which supplies cloud infrastructure to OpenAI, said the U.S. Defense Information Systems Agency has authorized the use of the Azure OpenAI service with secret classified information. 

WATCH: OpenAI hits $10 billion in annual recurring revenue
