Alongside new iPhones and Apple Watches, Apple is releasing a new version of its AirPods Pro this month.
The 2nd Generation AirPods Pro with USB-C — a mouthful of a model name — don’t have any radical hardware changes. Apple simply replaced the proprietary Lightning port with a USB-C port to match the rest of its lineup.
But a slew of software features launching alongside the new AirPods significantly change how noise-canceling on the wireless buds works in practice, and will make it much easier for AirPods Pro users to leave their earbuds in all day while navigating cities or talking to co-workers.
Apple has given the new features various names — Adaptive Audio, Conversation Awareness, Personalized Volume — but taken together, and using the default settings on a review unit of the new $249 AirPods, the upshot is that the device uses machine learning and artificial intelligence to turn down music when in a conversation or allow necessary nearby sounds into the headphones.
Instead of taking out your AirPods or turning off noise-canceling entirely when you’re navigating a treacherous street or having a conversation with a co-worker, you can now leave them in and rely on Apple’s software to intelligently decide what you need to hear.
Overall, the improvements are subtle but nice. They’re not a reason to upgrade AirPods if you have an older pair that’s working perfectly, but they are worth reaching for if you are getting new wireless headphones and know you don’t like to be constantly taking them in and out.
However, from a technological perspective, the new AirPods are exciting. Apple is using cutting-edge technology and its own customized chips to filter the world of sound through Apple’s hardware, and to augment or mute individual sounds to make your daily experience better, all powered by AI. Apple’s headphones are going far beyond the simple on-or-off noise-canceling features on competing devices.
The concept is not that far away from the “spatial computing” Apple introduced with the Vision Pro VR headset, which uses machine learning to integrate the real and computer worlds. Apple calls the AirPods a “wearable,” and reports it in the same revenue category as its Apple Watch. With its new adaptive features, the AirPods are more wearable than ever, and continue to be one of the company’s most intriguing product lines in terms of a look at the future of computing, even if they don’t get the same attention as the iPhone.
How it works
While the adaptive technology isn’t quite seamless yet, it is a nice improvement over the blunter, muffling noise-cancellation setting that used to be the default on AirPods Pro. And it’s not only limited to the latest hardware — anyone with “second generation” AirPods Pro introduced last September can download software updates for their headphones and iPhone to enable them.
The new Adaptive mode ultimately blends chaotic street noise with the artificial quiet of active noise cancellation. Apple frames Adaptive Audio as a safety feature, so users don’t miss honks or disturbances when walking around cities. It’s subtle. You definitely feel like you’re still in a cocoon of quiet, but you don’t feel as if the whole world is muffled around you.
There’s a little chime when users turn it on, either through the Settings app when the earbuds are connected or through a shortcut by long-pressing the volume slider in the iPhone’s Control Center.
In practice, Adaptive Audio wasn’t perfect, but it’s an improvement over active noise canceling, which can be very isolating, and Apple’s transparency mode, which often amplifies extraneous noise (like the AirPods case clicking against car keys in my pocket). If I were to walk around cities, which I try to avoid for safety reasons, I would use Apple’s Adaptive mode.
But Bay Area BART station announcements made over a central speaker were still muffled, especially when I was listening to music, and that’s the sort of information I would like to hear. I still needed to turn off the headphones or take them out if I wanted to understand what they were saying, such as which train was coming into the station.
When walking in a dog park separated from a highway by a sound wall, Adaptive Audio let in more highway noise than active-cancellation mode, which wasn’t optimal. Later, when another person in the park was arguing about something and making a scene, I didn’t catch it by ear in Adaptive mode; I saw the dispute first. While many people use noise-canceling headphones to tune out those kinds of disturbances, from a safety perspective that’s the sort of thing urban dwellers may want to stay aware of in their vicinity.
Another key scenario for noise-canceling headphones is in the workplace, where workers who are headed back to the office are increasingly using them to try to simulate home office-like privacy or signal to co-workers they can’t talk.
It’s here where the Conversation Awareness feature will shine, allowing office grinders to hold quick conversations without taking out their AirPods. The feature effectively turns down your music or audio when it senses you’re taking part in a conversation. Instead of fumbling in settings to turn noise-canceling off or turn off the music, or taking the earbuds out of your ears, the software does it for you, and even amplifies the conversation a little bit.
When it works, it’s great. I had a couple of conversations with my wife with the AirPods in and Conversation Awareness on. We spoke as if I didn’t have $250 of technology in my ears, and when I went back to doing what I was doing before, the volume of my music automatically returned to normal levels.
But there’s one big catch to Conversation Awareness: it doesn’t engage when someone talks to you, only when you open your mouth and say something. So I found myself missing the first thing said in several conversations, such as when a neighbor greeted me, or what the cashier said when I approached my favorite taco truck.
At the taco truck, I found myself regretting not taking out the AirPods. I felt like I missed a little context in the short exchange, and felt rude for keeping in my headphones. I heard and understood the key bits, such as the total price, but it didn’t feel like the same real-time conversation as if I were speaking without headphones.
Also, Conversation Awareness did not turn down my music five minutes later when the cashier called out my order for pickup. Ultimately, my order was wrong too, probably because I was distracted. But it’s easy to see how people will use the feature to order a cold brew without pausing their music.
There are other little quirks, too. I like to sing along to music when I’m alone. With Conversation Awareness on, the music gets turned down, leaving you to hear your own flat singing. Once, when I was working at my computer, I laughed, and the AirPods algorithm thought I was trying to speak. I also never realized how much I mutter to myself when I’m writing.
Personalized Volume uses machine learning to adjust the overall audio level, taking into account your historical preferences — for me, louder than is healthy — and the exterior noise. I only noticed it once, when it turned down the volume after I had jacked it up.
Taking all this into account, the new AirPods features might not be a reason to rush out and get the latest model, but they clearly show that Apple’s headphones are evolving to become something more sophisticated than small speakers.
Elon Musk announced his new company xAI, which he says aims to understand the true nature of the universe.
xAI, the artificial intelligence startup run by Elon Musk, raised a combined $10 billion in debt and equity, Morgan Stanley said.
Half of that sum came through secured notes and term loans, while the other $5 billion came from strategic equity investment, the bank said on Monday.
The funding gives xAI more firepower to build out infrastructure and develop its Grok AI chatbot as it looks to compete with bitter rival OpenAI, as well as with a swathe of other players including Amazon-backed Anthropic.
In May, Musk told CNBC that xAI has already installed 200,000 graphics processing units (GPUs) at its Colossus facility in Memphis, Tennessee. Colossus is xAI’s supercomputer that trains the firm’s AI. Musk at the time said that his company will continue buying chips from semiconductor giants Nvidia and AMD and that xAI is planning a 1-million-GPU facility outside of Memphis.
Addressing the latest funds raised by the company, Morgan Stanley said “the proceeds will support xAI’s continued development of cutting-edge AI solutions, including one of the world’s largest data centers and its flagship Grok platform.”
xAI continues to release updates to Grok and unveiled the Grok 3 AI model in February. Musk has sought to boost the use of Grok by integrating the AI model with the X social media platform, formerly known as Twitter. In March, xAI acquired X in a deal that valued the site at $33 billion and the AI firm at $80 billion. It’s unclear if the new equity raise has changed that valuation.
xAI was not immediately available for comment.
Last year, xAI raised $6 billion at a valuation of $50 billion, CNBC reported.
Morgan Stanley said the latest debt offering was “oversubscribed and included prominent global debt investors.”
Competition among American AI startups is intensifying, with companies raising huge amounts of funding to buy chips and build infrastructure.
Musk has called Grok a “maximally truth-seeking” AI that is also “anti-woke,” in a bid to set it apart from its rivals. But this has not come without its fair share of controversy. Earlier this year, Grok responded to user queries with unrelated comments about the controversial topic of “white genocide” and South Africa.
Musk has also clashed with fellow AI leaders, including OpenAI’s Sam Altman. Most famously, Musk claimed that OpenAI, which he co-founded, has deviated from its original mission of developing AI to benefit humanity as a nonprofit and is instead focused on commercial success. In February, Musk, alongside a group of investors, put in a bid of $97.4 billion to buy control of OpenAI. Altman swiftly rejected the offer.
— CNBC’s Lora Kolodny and Jonathan Vanian contributed to this report.
Huawei has open-sourced two of its artificial intelligence models — a move tech experts say will help the U.S.-blacklisted firm continue to build its AI ecosystem and expand overseas.
The Chinese tech giant announced on Monday the open-sourcing of the AI models under its Pangu series, as well as some of its model reasoning technology.
Tech experts told CNBC that Huawei’s latest announcements not only highlight how it is solidifying itself as an open-source LLM player, but also how it is strengthening its position across the entire AI value chain as it works to overcome U.S.-led AI chip export restrictions.
In recent years, the company has transformed from a competent private sector telecommunications firm into a “muscular technology juggernaut straddling the entire AI hardware and software stack,” said Paul Triolo, partner and senior vice president for China at advisory firm DGA-Albright Stonebridge Group.
In its announcement Monday, Huawei called the open-sourcing another key measure in its “Ascend ecosystem strategy,” one that would help speed up the adoption of AI across “thousands of industries.”
The Ascend ecosystem refers to AI products built around the company’s Ascend AI chip series, which is widely considered China’s leading competitor to products from American chip giant Nvidia. Nvidia is restricted from selling its most advanced products to China.
A Google-like strategy?
Pangu being available in an open-source manner allows developers and businesses to test the models and customize them for their needs, said Lian Jye Su, chief analyst at Omdia. “The move is expected to incentivize the use of other Huawei products,” he added.
According to experts, the coupling of Huawei’s Pangu models with the company’s AI chips and related products gives the company a unique advantage, allowing it to optimize its AI solutions and applications.
While competitors like Baidu have LLMs with broad capabilities, Huawei has focused on specialized AI models for sectors such as government, finance and manufacturing.
“Huawei is not as strong as companies like DeepSeek and Baidu at the overall software level – but it doesn’t need to be,” said Marc Einstein, research director at Counterpoint Research.
“Its objective is to ultimately use open source products to drive hardware sales, which is a completely different model from others. It also collaborates with DeepSeek, Baidu and others and will continue to do so,” he added.
Ray Wang, principal analyst at Constellation Research, said the chip-to-model strategy is similar to that of Google, a company that is also developing AI chips and AI models like its open-source Gemma models.
Huawei’s announcement on Monday could also help with its international ambitions. Huawei, along with players like Zhipu AI, has been slowly making inroads into new overseas markets.
In its announcement Monday, Huawei invited developers, corporate partners and researchers around the world to download and use its new open-source products in order to gather feedback and improve them.
“Huawei’s open-source strategy will resonate well in developing countries where enterprises are more price-sensitive as is the case with [Huawei’s] other products,” Einstein said.
As part of its global strategy, the company has also been looking to bring its latest AI data center solutions to new countries.
As artificial intelligence becomes more democratized, it is important for emerging economies to build their own “sovereign AI,” panelists told CNBC’s East Tech West conference in Bangkok, Thailand, on Friday.
In general, sovereign AI refers to a nation’s ability to control its own AI technologies, data and related infrastructure, ensuring strategic autonomy while meeting its unique priorities and security needs.
However, this sovereignty has been lacking, according to panelist Kasima Tharnpipitchai, head of AI strategy at SCB 10X, the technology investment arm of Thailand-based SCBX Group. He noted that many of the world’s most prominent large language models, operated by companies such as Anthropic and OpenAI, are based on the English language.
“The way you think, the way you interact with the world, the way you are when you speak another language can be very different,” Tharnpipitchai said.
It is, therefore, important for countries to take ownership of their AI systems, developing technology for specific languages, cultures, and countries, rather than just translating over English-based models.
Panelists agreed that the digitally savvy ASEAN region, with a total population of nearly 700 million people, is particularly well positioned to build its sovereign AI. People under the age of 35 make up around 61% of the population, and about 125,000 new users gain access to the internet daily.
Given this context, Jeff Johnson, managing director of ASEAN at Amazon Web Services, said, “I think it’s really important, and we’re really focused on how we can really democratize access to cloud and AI.”
Open-source models
According to panelists, one key way that countries can build up their sovereign AI environments is through the use of open-source AI models.
“There is plenty of amazing talent here in Southeast Asia and in Thailand, especially. To have that captured in a way that isn’t publicly accessible or ecosystem developing would feel like a shame,” said SCB 10X’s Tharnpipitchai.
Doing open-source is a way to create a “collective energy” to help Thailand better compete in AI and push sovereignty in a way that is beneficial for the entire country, he added.
Open-source generally refers to software in which the source code is made freely available, allowing anyone to view, modify and redistribute it. LLM players such as China’s DeepSeek and Meta, with its Llama family, advertise their models as open-source, albeit with some restrictions.
The emergence of more open-source models offers companies and governments more options compared to relying on a few closed models, according to Cecily Ng, vice president and general manager of ASEAN & Greater China at software vendor Databricks.
AI experts have previously told CNBC that open-source AI has helped China boost AI adoption, better develop its AI ecosystem and compete with the U.S.
Access to computing
Prem Pavan, vice president and general manager of Southeast Asia and Korea at Red Hat, said that the localization of AI had been focused on language until recently. Having sovereign access to AI models powered by local hardware and computing is more important today, he added.
Panelists said that for emerging countries like Thailand, AI localization can be offered by cloud computing companies with domestic operations. These include global hyperscalers such as AWS, Microsoft Azure and Tencent Cloud, and sovereign players like AIS Cloud and True IDC.
“We’re here in Thailand and across Southeast Asia to support all industries, all businesses of all shapes and sizes, from the smallest startup to the largest enterprise,” said AWS’s Johnson.
He added that the economic model of the company’s cloud services makes it easy to “pay for what you use,” thus lowering the barriers to entry and making it very easy to build models and applications.
In April, the U.N. Trade and Development Agency said in a report that AI was projected to reach $4.8 trillion in market value by 2033. However, it warned that the technology’s benefits remain highly concentrated, with nations at risk of lagging behind.
Among UNCTAD’s recommendations to the international community for driving inclusive growth was shared AI infrastructure, the use of open-source AI models and initiatives to share AI knowledge and resources.