Baidu CEO Robin Li speaks during the company’s Create conference in Shenzhen, China, on April 16, 2024.
Bloomberg | Bloomberg | Getty Images
SHENZHEN, China – One year after Chinese search engine operator Baidu released its ChatGPT-like Ernie bot, the company this week announced tools to encourage locals to develop artificial intelligence applications.
“In China today, there are 1 billion internet users, strong foundation models, sufficient AI application scenarios and the most complete industrial system in the world,” CEO Robin Li said in his opening speech at Baidu’s annual AI developers conference on Tuesday.
“Everyone can be a developer,” he said in Mandarin, according to a CNBC translation.
While many point out that China lags behind the U.S. in artificial intelligence capabilities, others argue that the Chinese market's strength lies in applying the technology. Take next-day e-commerce and 30-minute food delivery, for example.
Baidu’s newly announced AI tools allow people with no coding knowledge to create generative AI-powered chatbots for specific functions, which can then be integrated into a website, Baidu search engine results or other online portals. That’s different from GPTs, a similar tool OpenAI launched earlier this year, since those custom-built chatbots, which do everything from suggesting movies to fixing code, sit within the ChatGPT interface.
The basic Baidu tools are generally free to try up to a certain usage limit, similar to some of Google’s cloud and AI functions. OpenAI charges a monthly fee for the latest version of ChatGPT and for the ability to call it from computer programs. The older GPT-3.5 model is free to use, but without access to the custom-built GPTs.
Baidu this week also announced three new versions of its Ernie AI model — called “Speed,” “Lite” and “Tiny” — that coders can selectively access, based on the complexity of the task.
“It feels like their focus is on building the entire native AI development ecosystem, providing a full set of development tools and platform solutions,” said Bo Du, managing director at WestSummit Capital Management. That’s according to a CNBC translation of the Chinese remarks.
Baidu said this week that Ernie bot has accumulated more than 200 million users since its launch in March last year, and that computer programs are accessing the underlying AI model 200 million times a day. The company said more than 85,000 business clients have used its AI cloud platform to create 190,000 AI applications.
How the tech is being used
Many of the use cases Baidu showed off this week centered on consumer-facing applications: tourism, creating content such as picture books, and scheduling meetings.
In a demonstration hall, Baidu business departments showed how the AI tools could power virtual hosts for livestreams or direct search engine traffic to an AI-based interactive buying guide.
Buysmart.AI, which won Baidu’s AI competition last year, uses the tech for an online shopping assistant connected to Chinese social media platform Weibo. The startup said it is using ChatGPT for a standalone interactive e-commerce app in the U.S.
“Personally I think that Ernie 4.0 has a better grasp of Chinese than ChatGPT 3.5,” Buysmart.AI co-founder Andy Qiu said in an interview. That’s according to a CNBC translation of his Mandarin-language remarks.
Consumers in the U.S. are currently more interested in AI products than users in China are, Qiu said. But he said that overall there is still room for improvement when it comes to building consumers’ trust of AI assistants and convincing users to place an order.
Also on display was a humanoid robot developed by Shenzhen-based UBTech Robotics that used Baidu’s Ernie AI model for understanding commands and reading written words.
It’s not immediately clear how significantly such AI applications can change business at this point. But Baidu is the latest company to roll out tools that let people experiment with the technology more easily and cheaply.
Customer service, voice assistants and internet-connected devices can use smaller AI models to respond quickly to users, pointed out Helen Chai, managing director at CIC Consulting.
She added that in scenarios such as legal consultation or medical diagnosis, small AI models can be trained on specific data to achieve performance that’s comparable to larger AI models.
In the future, big AI-based applications will be built on a mixture of models, Baidu CEO Li said, using the technical term “mixture of experts,” or MoE.
He also promoted Baidu’s capabilities in AI-produced code, one of the areas in which Silicon Valley tech companies see the most potential for generative AI.
Baidu said that since it deployed its “Comate” AI coding assistant a year ago, the tool has contributed to 27% of the tech company’s newly generated code. Audio streaming app Ximalaya, IT services company iSoftStone and Shanghai Mitsubishi Elevator are among more than 10,000 corporate Comate users, which have adopted nearly half of the code the tool generates, according to Baidu.
The global rush to develop generative AI has created a shortage of the semiconductors needed to provide computing power. Chinese companies face added constraints due to U.S. restrictions on chip exports.
Baidu did not specifically discuss a shortage in computing power during the main conference session. In his speech, Dou Shen, head of AI cloud at Baidu, noted “uncertainties” in the chip supply chain and announced that Baidu has a platform that can access the power of several different kinds of chips.
Back in February, Li said on an earnings call that Baidu’s AI chip reserve “enables us to continue enhancing Ernie for the next one or two years.” The company is set to release first-quarter results on May 16.
A Waymo autonomous self-driving Jaguar electric vehicle sits parked at an EVgo charging station in Los Angeles, California, on May 15, 2024.
Patrick T. Fallon | AFP | Getty Images
Waymo said it will begin testing in Philadelphia, with a limited fleet of vehicles and human safety drivers behind the wheel.
“This city is a National Treasure,” Waymo wrote in a post on X on Monday. “It’s a city of love, where eagles fly with a gritty spirit and cheese that spreads and cheese that steaks. Our road trip continues to Philly next.”
The Alphabet-owned company confirmed to CNBC that it will be testing in Pennsylvania’s largest city through the fall, adding that the initial fleet of cars will be manually driven through the more complex parts of Philadelphia, including downtown and on freeways.
“Folks will see our vehicles driving at all hours throughout various neighborhoods, from North Central to Eastwick, and from University City to as far east as the Delaware River,” a Waymo spokesperson said.
With its so-called road trips, Waymo seeks to collect mapping data and evaluate how its autonomous technology, the Waymo Driver, performs in new environments, handling local traffic patterns and infrastructure. Road trips are often used as a way for the company to gauge whether it can eventually offer a paid ride-hailing service in a particular location.
The expanded testing, which will go through the fall, comes as Waymo aims for a broader rollout. Last month, the company announced plans to drive vehicles manually in New York for testing, marking the first step toward potentially cracking the largest U.S. city. Waymo applied for a permit with the New York City Department of Transportation to operate autonomously with a trained specialist behind the wheel in Manhattan. State law currently doesn’t allow for such driverless operations.
Waymo One provides more than 250,000 paid trips each week across Phoenix, San Francisco, Los Angeles, and Austin, Texas, and is preparing to bring fully autonomous rides to Atlanta, Miami, and Washington, D.C., in 2026.
Alphabet has been under pressure to monetize artificial intelligence products as it bolsters spending on infrastructure. Alphabet’s “Other Bets” segment, which includes Waymo, brought in revenue of $1.65 billion in 2024, up from $1.53 billion in 2023. However, the segment lost $4.44 billion last year, compared to a loss of $4.09 billion the previous year.
White House trade advisor Peter Navarro chastised Apple CEO Tim Cook on Monday over the company’s response to pressure from the Trump administration to make more of its products outside of China.
“Going back to the first Trump term, Tim Cook has continually asked for more time in order to move his factories out of China,” Navarro said in an interview on CNBC’s “Squawk on the Street.” “I mean it’s the longest-running soap opera in Silicon Valley.”
CNBC has reached out to Apple for comment on Navarro’s criticism.
President Donald Trump has in recent months ramped up demands for Apple to move production of its iconic iPhone to the U.S. from overseas. Apple’s flagship phone is produced primarily in China, but the company has increasingly boosted production in India, partly to avoid the higher cost of Trump’s tariffs.
Trump in May warned Apple would have to pay a tariff of 25% or more for iPhones made outside the U.S. In separate remarks, Trump said he told Cook, “I don’t want you building in India.”
Analysts and supply chain experts have argued it would be impossible for Apple to completely move iPhone production to the U.S. By some estimates, a U.S.-made iPhone could cost as much as $3,500.
Navarro said Cook isn’t shifting production out of China quickly enough.
“With all these new advanced manufacturing techniques and the way things are moving with AI and things like that, it’s inconceivable to me that Tim Cook could not produce his iPhones elsewhere around the world and in this country,” Navarro said.
Apple currently makes very few products in the U.S. During Trump’s first term, Apple extended its commitment to assemble the $3,000 Mac Pro in Texas.
In February, Apple said it would spend $500 billion within the U.S., including on assembling some AI servers.
CoreWeave founders Brian Venturo, at left in sweatshirt, and Mike Intrator slap five after ringing the opening bell at Nasdaq headquarters in New York on March 28, 2025.
Michael M. Santiago | Getty Images News | Getty Images
Artificial intelligence hyperscaler CoreWeave said Monday it will acquire Core Scientific, a leading data center infrastructure provider, in an all-stock deal valued at approximately $9 billion.
CoreWeave stock fell about 4% on Monday, while Core Scientific stock plummeted about 20%. Shares of both companies rallied at the end of June after The Wall Street Journal reported that talks were underway for an acquisition.
The deal strengthens CoreWeave’s position in the AI arms race by bringing critical infrastructure in-house.
CoreWeave CEO Michael Intrator said the move will eliminate $10 billion in future lease obligations and significantly enhance operating efficiency.
The transaction is expected to close in the fourth quarter of 2025, pending regulatory and shareholder approval.
The deal expands CoreWeave’s access to power and real estate, giving it ownership of 1.3 gigawatts of gross capacity across Core Scientific’s U.S. data center footprint, with another gigawatt available for future growth.
Core Scientific has increasingly focused on high-performance compute workloads since emerging from bankruptcy and relisting on the Nasdaq in 2024.
Core Scientific shareholders will receive 0.1235 CoreWeave shares for each share they hold — implying a $20.40 per-share valuation and a 66% premium to Core Scientific’s closing stock price before deal talks were reported.
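For readers who want to check the exchange-ratio math, here is a minimal sketch. The CoreWeave reference price and the Core Scientific pre-report close used below are assumptions back-solved to reproduce the figures above; they are not numbers from the announcement.

```python
# Back-of-the-envelope check of the deal math described above.
# The two prices below are assumptions chosen to reproduce the article's
# figures ($20.40 implied value, ~66% premium); they are not official deal inputs.

EXCHANGE_RATIO = 0.1235                # CoreWeave shares per Core Scientific share (from the article)
coreweave_reference_price = 165.18     # assumed CoreWeave reference price, in dollars
core_scientific_prior_close = 12.30    # assumed Core Scientific close before deal talks were reported

implied_value = EXCHANGE_RATIO * coreweave_reference_price
premium = implied_value / core_scientific_prior_close - 1

print(f"Implied per-share value: ${implied_value:.2f}")  # ~$20.40
print(f"Premium to prior close: {premium:.0%}")          # ~66%
```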
After closing, Core Scientific shareholders will own less than 10% of the combined company.