Neuralink logo displayed on a phone screen, a paper silhouette in the shape of a human face and binary code displayed on a screen are seen in this multiple exposure illustration photo taken in Krakow, Poland, on December 10, 2021.
Jakub Porzycki | Nurphoto | Getty Images
Neuralink, the neurotech startup co-founded by Elon Musk, announced Thursday it has received approval from the Food and Drug Administration to conduct its first in-human clinical study.
Neuralink is building a brain implant called the Link, which aims to help patients with severe paralysis control external technologies using only neural signals. This means patients with severe degenerative diseases like ALS could eventually regain their ability to communicate with loved ones by moving cursors and typing with their minds.
“This is the result of incredible work by the Neuralink team in close collaboration with the FDA and represents an important first step that will one day allow our technology to help many people,” the company wrote in a tweet.
The FDA and Neuralink did not immediately respond to CNBC’s request for comment. The extent of the approved trial is not known. Neuralink said in a tweet that patient recruitment for its clinical trial is not open yet.
Neuralink is part of the emerging brain-computer interface, or BCI, industry. A BCI is a system that deciphers brain signals and translates them into commands for external technologies. Neuralink is perhaps the best-known name in the space thanks to the high profile of Musk, who is also the CEO of Tesla, SpaceX and Twitter.
Scientists have been studying BCI technology for decades, and several companies have developed promising systems they hope to bring to market. But receiving FDA approval for a commercial medical device is no small task — it requires companies to successfully complete several extremely thorough rounds of testing and safety data collection.
No BCI company has managed to clinch the FDA’s final seal of approval. But by receiving the go-ahead for a study with human patients, Neuralink is one step closer to market.
Neuralink’s BCI will require patients to undergo invasive brain surgery. Its system centers around the Link, a small circular implant that processes and translates neural signals. The Link is connected to a series of thin, flexible threads inserted directly into the brain tissue where they detect neural signals.
Patients with Neuralink devices will learn to control them using the Neuralink app. They will then be able to control external mice and keyboards through a Bluetooth connection, according to the company’s website.
The FDA’s approval for an in-human study is a significant win for Neuralink after a series of recent hurdles at the company. In February, the U.S. Department of Transportation confirmed to CNBC that it had opened an investigation into Neuralink for allegedly packaging and transporting contaminated hardware in an unsafe manner. Reuters reported in March that the FDA had rejected Neuralink’s application for human trials, and reportedly outlined “dozens” of issues the company needed to address.
Neuralink has also come under fire from activist groups for its alleged treatment of animals. The Physicians Committee for Responsible Medicine, which advocates against animal testing, has repeatedly called on Musk to release details about experiments on monkeys that resulted in internal bleeding, paralysis, chronic infections, seizures, declining psychological health and death.
A representative for PCRM did not immediately respond to CNBC’s request for comment.
In addition to helping patients with paralysis, experts believe BCIs could someday help treat maladies like blindness and mental illness. Musk has expressed his intent for Neuralink to explore these future use cases, as well as potential applications for healthy people.
At a “show and tell” recruitment event late last year, Musk even claimed he plans to someday receive one of Neuralink’s implants himself.
“You could have a Neuralink device implanted right now and you wouldn’t even know,” Musk said at the time. “In fact, in one of these demos, I will.”
Masayoshi Son, chairman and chief executive officer of SoftBank Group Corp., speaks at the SoftBank World event in Tokyo, Japan, on Wednesday, July 16, 2025. Speaking via teleconference, Son and OpenAI chief Sam Altman argued that advancing artificial intelligence would lead to new jobs that are not yet imagined, and the advancement of robotics will help kickstart a “self-improvement” loop. Photographer: Kiyoshi Ota/Bloomberg via Getty Images
Bloomberg | Bloomberg | Getty Images
SoftBank Group founder Masayoshi Son on Monday defended the decision to offload the conglomerate’s entire Nvidia stake, saying he “was crying” over parting with the shares.
Speaking at a forum in Tokyo Monday, Son addressed SoftBank’s November disclosure that the firm had sold its holding in the American chip darling for $5.83 billion.
According to Son, SoftBank wouldn’t have made the move if it didn’t need to bankroll its next artificial intelligence investments, including a big bet on OpenAI and data center projects.
“I don’t want to sell a single share. I just had more need for money to invest in OpenAI and other projects,” Son said during the FII Priority Asia forum. “I was crying to sell Nvidia shares.”
Son’s comments are consistent with what analysts and other SoftBank executives said in November, describing the sale as part of broader efforts to bolster SoftBank Vision Fund’s AI war chest.
The Japanese giant could also “potentially” increase its investment in OpenAI depending on the performance of the ChatGPT maker and the valuation of further rounds, a person familiar with the matter previously told CNBC.
Earlier this year, Son said that SoftBank was “all in” on OpenAI and predicted the AI startup would one day become the most valuable company in the world.
So far, that bet has reaped some dividends, with SoftBank reporting last month that its second-quarter net profit more than doubled to 2.5 trillion yen ($16.6 billion), driven by valuation gains in its OpenAI holdings.
However, SoftBank’s massive AI bets come amid growing fears and jitters in markets about a potential AI bubble.
In his Monday talk, Son also pushed back against these concerns, arguing that those who talk about an AI bubble are “not smart enough.”
He predicted that “super [artificial] intelligence” and AI robots will generate at least 10% of global gross domestic product over the long term, which he said would outweigh trillions of dollars of investment into the technology.
The logo of an Apple Store is seen reflected on the glass exterior of a Samsung flagship store in Shanghai, China, on Monday, Oct. 20, 2025.
Wang Gang | Feature China | Future Publishing | Getty Images
The cost of your smartphone might rise, analysts warn, as the AI boom clogs up supply chains, and a recent change to Nvidia’s products could make things worse.
AI data centers, on which tech giants globally are spending hundreds of billions of dollars, require chips from suppliers like Nvidia, which in turn relies on many different components and companies to produce its coveted graphics processing units.
But other companies, from chipmakers like AMD to hyperscalers like Google and Microsoft, as well as other component suppliers, all rely on this same supply chain.
Many parts of the supply chain can’t keep up with demand, and that is slowing down components that are critical for some of the world’s most popular consumer electronics. Those components are seeing huge spikes in prices, threatening price rises for end products and potentially even shortages of some devices.
“We see the rapid increase in demand for AI in data centers driving bottlenecks in many areas,” Peter Hanbury, partner in the technology practice at Bain & Company, told CNBC.
Where is the supply chain clogged?
One of the starkest assessments came from Eddie Wu, CEO of Chinese tech giant Alibaba.
Wu, whose company is building its own AI infrastructure and designs its own chips, said last week that there are shortages across semiconductor manufacturers, memory chips and storage devices like hard drives.
“There is a situation of undersupply,” Wu said, adding that the “supply side is going to be a relatively large bottleneck.” He added this could last two to three years.
Bain & Co.’s Hanbury said there are shortages of hard disk drives, or HDDs, which store data in data centers and are the preferred option for hyperscalers, big companies like Microsoft and Google. But, with HDD production at capacity, these firms have shifted to using solid-state drives, or SSDs, another type of storage device.
However, these SSDs are key components for consumer electronics.
The other big focus is on a type of chip under the memory umbrella called dynamic random-access memory, or DRAM. Nvidia’s chips use high-bandwidth memory, or HBM, a type of chip that stacks multiple DRAM semiconductors.
Memory prices have surged as a result of the huge demand and lack of supply. Counterpoint Research said it expects memory prices to rise 30% in the fourth quarter of this year and another 20% in early 2026. Even small imbalances in supply and demand can have major knock-on effects on memory pricing. And because of the demand for HBM and GPUs, chipmakers are prioritizing these over other types of semiconductors.
“DRAM is certainly a bottleneck as AI investments continue to feed the imbalance between demand and supply with HBM for AI being prioritized by chipmakers,” MS Hwang, research director at Counterpoint Research, told CNBC.
“Imbalances of 1-2% can trigger sharp price increases and we’re seeing that figure hitting 3% levels at the moment – this is very significant.”
Why are there issues?
Building up capacity in various areas of the semiconductor supply chain is capital-intensive. And the industry, known to be risk-averse, did not add the capacity necessary to meet the projections provided by key industry players, Bain & Co.’s Hanbury said.
“The direct cause of the shortage is the rapid increase in demand for data center chips,” Hanbury said.
“Basically, the suppliers worried the market was too optimistic and they did not want to overbuild very expensive capacity so they did not build to the estimates provided by their customers. Now, the suppliers need to add capacity quickly but as we know, it takes 2-3 years to add semiconductor manufacturing fabs.”
Nvidia at the center
Much of the attention is on Nvidia, given its dominance in the chips being put into AI data centers.
It is a huge customer for high-bandwidth memory, for example, and its products are manufactured by TSMC, which also has other major customers such as Apple.
But analysts are focused on a change Nvidia has made to its products that has the potential to add major pressure to consumer electronics supply chains. The U.S. giant is increasingly shifting toward a type of memory called Low-Power Double Data Rate, or LPDDR, which is seen as more power-efficient than standard Double Data Rate, or DDR, memory.
The problem is, Nvidia is increasingly using the latest generation of LPDDR memory, which is also used by high-end consumer electronics makers such as Samsung and Apple.
Typically, the industry would just be dealing with demand for this product from a handful of big electronics players. But now Nvidia, which has huge scale, is entering the mix.
“We also see a bigger risk on the horizon with advanced memory, as Nvidia’s recent pivot to LPDDR means they’re a customer on the scale of a major smartphone maker — a seismic shift for the supply chain, which can’t easily absorb this scale of demand,” Hwang from Counterpoint Research said.
How the AI boom is impacting consumer electronics
Here’s the link between all of this.
Chip manufacturers like TSMC, Intel and Samsung have only so much capacity. If there is huge demand for certain types of chips, these companies will prioritize those, especially orders from their larger customers. That can lead to shortages of other types of semiconductors elsewhere.
Memory chips, in particular DRAM, which has seen prices shoot up, are of particular concern because they are used in so many devices, from smartphones to laptops. That could lead to price rises in the world’s favorite electronics.
DRAM and storage represent around 10% to 25% of the bill of materials for a typical PC or smartphone, according to Hanbury of Bain & Co. A price increase of 20% to 30% in these components would increase the total bill of materials costs by 5% to 10%.
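To illustrate that math, here is a minimal back-of-the-envelope sketch in Python. The 25% component share and 30% price rise below are simply the upper ends of the ranges Hanbury cites, not figures for any specific device: the knock-on effect on a device’s total bill of materials is roughly the components’ share of the BOM multiplied by their price increase.

```python
# Rough back-of-the-envelope sketch of the bill-of-materials (BOM) impact
# described above. The inputs are the upper ends of the ranges cited by
# Bain & Co.'s Hanbury, not data for any specific phone or PC.

memory_storage_share = 0.25   # DRAM + storage as a share of the total BOM (10%-25% range)
component_price_rise = 0.30   # expected price increase for those components (20%-30% range)

# Increase in the total BOM cost attributable to these components alone.
bom_cost_increase = memory_storage_share * component_price_rise
print(f"Total BOM cost increase: {bom_cost_increase:.1%}")  # prints 7.5%
```

At the upper end of both ranges, that works out to roughly 7.5%, consistent with the 5% to 10% impact Hanbury describes.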
“In terms of timing, the impact will likely start shortly as component costs are already increasing and likely accelerate into next year,” Hanbury said.
On top of this, there is now demand from players involved in AI data centers, like Nvidia, for components that would typically have gone into consumer devices, such as LPDDR, adding even more demand to a supply-constrained market.
If electronics firms can’t get their hands on the components needed for their devices because they’re in short supply or going toward AI data centers, then there could be shortages of the world’s most popular gadgets.
“Beyond the rise in cost there’s a second issue and that’s the inability to secure enough components, which constrains the production of electronic devices,” Counterpoint Research’s Hwang said.
What are tech firms saying?
A number of electronics companies have warned about the impact they are seeing from all of this.
Xiaomi, the third-biggest smartphone vendor globally, said it expects that consumers will see “a sizeable rise in product retail prices,” according to a Reuters report this month.
Jeff Clarke, chief operating officer at Dell, said this month that the rise in component prices is “unprecedented.”
“We have not seen costs move at the rate that we’ve seen,” Clarke said on an earnings call, adding that the pressure is being felt across various types of memory chips and storage drives.
The unintended consequences
The AI infrastructure players are using similar chips to those being used in consumer electronics. These are often some of the more advanced semiconductors on the market.
But there are legacy chips which are manufactured by the same companies that the AI market is relying on. As these manufacturers shift attention to serving their AI customers, there could be unintended consequences for other industries.
“For example, many other markets depend on the same underlying semiconductor manufacturing capabilities as the data center market” including automobiles, industrials and aerospace and defense, which “will likely see some impact from these price increases as well,” Hanbury said.
Samsung Electronics’ Galaxy Z TriFold media day at Samsung Gangnam in Seoul, South Korea, on Dec. 2, 2025.
Anadolu | Anadolu | Getty Images
Samsung Electronics on Monday announced the launch of its first multi-folding smartphone as it races to keep pace with innovations from fast-moving rivals.
The long-anticipated “Galaxy Z TriFold” will go on sale in South Korea on Dec. 12, with launches to follow in other markets including China, Taiwan, Singapore, and the United Arab Emirates, the company said in a press release.
The phone will be available in the U.S. during the first quarter of 2026, with more details to be shared later, the South Korean tech giant added. The Galaxy Z TriFold will ship as a single model in black with 16GB of memory and 512GB of storage, priced at 3,594,000 South Korean won ($2,449).
The device uses two inward-folding hinges to open into a 10-inch display — a tad smaller than the 11th-generation iPad’s 11-inch display — with a 2160 x 1584 resolution.
When its screen panels are folded, the device measures 12.9 millimeters (0.5 inches) thick — slightly more than the Galaxy Z Fold6 at 12.1 mm and the latest Galaxy Z Fold7 at 8.9 mm.
“Samsung’s first tri-fold model will ship in very limited volume, but scale is not the objective,” Liz Lee, associate director at Counterpoint Research, said in a statement shared with CNBC.
“With competitive dynamics set to shift materially in 2026, especially with Apple’s expected entry into the foldable segment, Samsung is positioning this device as a multi-fold pilot to reinforce its technology leadership.”
A Samsung Electronics Co. Galaxy Z TriFold smartphone on display during a media preview in Seoul, South Korea, on Tuesday, Dec. 2, 2025.
Bloomberg | Bloomberg | Getty Images
Lee added that Samsung’s latest product is meant to test durability, hinge design and software performance while gathering real-world user insights before wider commercialization.
The phone’s three foldable panels can also run three apps vertically side by side, and offer a desktop-like mode without a separate display.
The TriFold features Samsung’s largest battery capacity among its foldable models and supports super-fast charging that reaches 50% in 30 minutes.
TM Roh, who was recently appointed Samsung Electronics co-CEO and head of the Device eXperience division, said the Galaxy Z TriFold reflects years of work on foldable designs and aims to balance portability, performance and productivity in one device.
Samsung was an early innovator in folding smartphones, unveiling its first foldable device in 2019. While the market has remained relatively small, new competitors have continued to enter, including Chinese brands that have proven competitive on both price and dimensions.
Visitors try out the Galaxy Z TriFold during Samsung Electronics’ Galaxy Z TriFold media day at Samsung Gangnam in Seoul, South Korea, on Dec. 2, 2025.
Anadolu | Anadolu | Getty Images
In September, telecommunications giant Huawei announced its second-generation trifold phone for the Chinese market, measuring 12.8 mm thick when folded.
This year has also seen Chinese brands like Honor launch foldable smartphones in international markets. Honor was spun off from Huawei in 2020 in a bid to avoid U.S. sanctions and tap international markets.
Like Samsung’s other recent foldables, the TriFold is rated IP48, meaning it is water-resistant up to 1.5 meters for up to 30 minutes but offers limited dust protection.