The logo of an Apple Store is seen reflected on the glass exterior of a Samsung flagship store in Shanghai, China Monday, Oct. 20, 2025.
Wang Gang | Feature China | Future Publishing | Getty Images
The cost of your smartphone might rise, analysts are warning, as the AI boom clogs up supply chains, and a recent change by Nvidia to its products could make matters worse.
AI data centers, on which tech giants globally are spending hundreds of billions of dollars, require chips from suppliers like Nvidia, which in turn relies on many different components and companies to create its coveted graphics processing units.
Other chipmakers like AMD, hyperscalers like Google and Microsoft, and component suppliers all rely on this same supply chain.
Many parts of the supply chain can’t keep up with demand, and that is holding up components critical for some of the world’s most popular consumer electronics. Those components are seeing huge spikes in prices, threatening price rises for end products and potentially even shortages of some devices.
“We see the rapid increase in demand for AI in data centers driving bottlenecks in many areas,” Peter Hanbury, partner in the technology practice at Bain & Company, told CNBC.
Where is the supply chain clogged?
One of the starkest assessments came from Eddie Wu, CEO of Chinese tech giant Alibaba.
Wu, whose company is building its own AI infrastructure and designs its own chips, said last week that there are shortages across semiconductor manufacturers, memory chips and storage devices like hard drives.
“There is a situation of undersupply,” Wu said, adding that the “supply side is going to be a relatively large bottleneck.” He added this could last two to three years.
Bain & Co.’s Hanbury said there are shortages of hard disk drives, or HDDs, which store data in data centers. These are preferred by hyperscalers: big companies like Microsoft and Google. But with HDDs at capacity, these firms have shifted to using solid-state drives, or SSDs, another type of storage device.
However, these SSDs are key components for consumer electronics.
The other big focus is on a type of chip under the memory umbrella called dynamic random-access memory, or DRAM. Nvidia’s chips use high-bandwidth memory, or HBM, a type of chip that stacks multiple DRAM semiconductors.
Memory prices have surged as a result of the huge demand and lack of supply. Counterpoint Research said it expects memory prices to rise 30% in the fourth quarter of this year and another 20% in early 2026. Even small imbalances in supply and demand can have major knock-on effects on memory pricing. And because of the demand for HBM and GPUs, chipmakers are prioritizing these over other types of semiconductors.
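Note that the two forecast increases compound rather than simply add. A minimal sketch of the arithmetic, using a hypothetical $100 component price (the 30% and 20% figures are Counterpoint's forecasts quoted above; the starting price is illustrative, not a real component cost):

```python
# How the forecast memory price rises compound over the two periods.
q4_rise = 0.30          # Counterpoint's forecast rise for Q4 2025
early_2026_rise = 0.20  # further forecast rise for early 2026

start_price = 100.0  # hypothetical component price
after_q4 = start_price * (1 + q4_rise)         # $130 after Q4
after_2026 = after_q4 * (1 + early_2026_rise)  # $156 after early 2026

cumulative = after_2026 / start_price - 1
print(f"Cumulative rise: {cumulative:.0%}")  # → Cumulative rise: 56%
```

So back-to-back rises of 30% and 20% leave prices roughly 56% higher, not 50%.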
“DRAM is certainly a bottleneck as AI investments continue to feed the imbalance between demand and supply with HBM for AI being prioritized by chipmakers,” MS Hwang, research director at Counterpoint Research, told CNBC.
“Imbalances of 1-2% can trigger sharp price increases and we’re seeing that figure hitting 3% levels at the moment – this is very significant.”
Why are there issues?
Building up capacity in various areas of the semiconductor supply chain is capital-intensive. And it’s an industry known to be risk-averse, one that did not add the capacity necessary to meet the projections provided by key industry players, Bain & Co.’s Hanbury said.
“The direct cause of the shortage is the rapid increase in demand for data center chips,” Hanbury said.
“Basically, the suppliers worried the market was too optimistic and they did not want to overbuild very expensive capacity so they did not build to the estimates provided by their customers. Now, the suppliers need to add capacity quickly but as we know, it takes 2-3 years to add semiconductor manufacturing fabs.”
Nvidia at the center
Much of the attention is on Nvidia, given it dominates the market for chips going into AI data centers.
It is a huge customer of high-bandwidth memory, for example. And its products are manufactured by TSMC, which also has other major customers like Apple.
But analysts are focused on a change Nvidia has made to its products that has the potential to add major pressure to consumer electronics supply chains. The U.S. giant is increasingly shifting toward using a type of memory in its products called Low-Power Double Data Rate, or LPDDR. This is seen as more power-efficient than the earlier Double Data Rate, or DDR, memory.
The problem is, Nvidia is increasingly using the latest generation of LPDDR memory, which is also used by high-end consumer electronics makers such as Samsung and Apple.
Typically, the industry would just be dealing with demand for this product from a handful of big electronics players. But now Nvidia, which has huge scale, is entering the mix.
“We also see a bigger risk on the horizon with advanced memory as Nvidia’s recent pivot to LPDDR means they’re a customer on the scale of a major smartphone maker — a seismic shift for the supply chain which can’t easily absorb this scale of demand,” Hwang from Counterpoint Research said.
How the AI boom is impacting consumer electronics
Here’s how it all connects.
Chip manufacturers like TSMC, Intel and Samsung have only so much capacity. If there is huge demand for certain types of chips, these companies will prioritize them, especially for their larger customers. That can lead to shortages of other types of semiconductors elsewhere.
Memory chips, especially DRAM, which has seen prices shoot up, are of particular concern because they’re used in so many devices, from smartphones to laptops. And this could lead to price rises in the world’s favorite electronics.
DRAM and storage represent around 10% to 25% of the bill of materials for a typical PC or smartphone, according to Hanbury of Bain & Co. A price increase of 20% to 30% in these components would increase the total bill of materials costs by 5% to 10%.
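A quick sketch of that bill-of-materials arithmetic, taking the upper end of the 10% to 25% component-share range as an assumed example (the device and figures are illustrative, not a real product costing):

```python
# Illustrative bill-of-materials (BOM) arithmetic using the Bain & Co.
# ranges quoted above. The 25% share and the price-rise figures come
# from those ranges; the device itself is hypothetical.
def bom_cost_increase(component_share, component_price_rise):
    """Fractional rise in total BOM cost when one component group rises in price."""
    return component_share * component_price_rise

memory_share = 0.25  # DRAM + storage at the top of the 10%-25% range

low = bom_cost_increase(memory_share, 0.20)   # 20% component price rise
high = bom_cost_increase(memory_share, 0.30)  # 30% component price rise

print(f"Total BOM cost rises between {low:.1%} and {high:.1%}")
# → between 5.0% and 7.5%, within the 5% to 10% impact Hanbury describes
```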
“In terms of timing, the impact will likely start shortly as component costs are already increasing and likely accelerate into next year,” Hanbury said.
On top of this, there is now demand from players involved in AI data centers, like Nvidia, for components that would typically have been used for consumer devices, such as LPDDR, adding more demand to a supply-constrained market.
If electronics firms can’t get their hands on the components needed for their devices because they’re in short supply or going toward AI data centers, then there could be shortages of the world’s most popular gadgets.
“Beyond the rise in cost there’s a second issue and that’s the inability to secure enough components, which constrains the production of electronic devices,” Counterpoint Research’s Hwang said.
What are tech firms saying?
A number of electronics companies have warned about the impact they are seeing from all of this.
Xiaomi, the third-biggest smartphone vendor globally, said it expects that consumers will see “a sizeable rise in product retail prices,” according to a Reuters report this month.
Jeff Clarke, chief operating officer at Dell, said this month that the price rises of components are “unprecedented.”
“We have not seen costs move at the rate that we’ve seen,” Clarke said on an earnings call, adding that the pressure spans various types of memory chips and storage hard drives.
The unintended consequences
The AI infrastructure players are using similar chips to those being used in consumer electronics. These are often some of the more advanced semiconductors on the market.
But legacy chips are also manufactured by the same companies the AI market is relying on. As these manufacturers shift attention to serving their AI customers, there could be unintended consequences for other industries.
“For example, many other markets depend on the same underlying semiconductor manufacturing capabilities as the data center market” including automobiles, industrials and aerospace and defense, which “will likely see some impact from these price increases as well,” Hanbury said.
Beta Technologies shares surged more than 9% after air taxi maker Eve Air Mobility announced an up to $1 billion deal to buy motors from the Vermont-based company.
Eve, which was started by Brazilian airplane maker Embraer and now operates as Eve Holding, said the manufacturing deal could be worth as much as $1 billion over 10 years. The Florida-based Eve said it has a backlog of 2,800 vehicles.
Shares of Eve Holding gained 14%.
Eve CEO Johann Bordais called the deal a “pivotal milestone” in the advancement of the company’s electric vertical takeoff and landing, or eVTOL, technology.
“Their electric motor technology will play a critical role in powering our aircraft during cruise, supporting the maturity of our propulsion architecture as we progress toward entry into service,” he said in a release.
Amazon’s cloud unit on Tuesday announced AI-enabled software designed to help clients better understand and recover from outages.
DevOps Agent, as the artificial intelligence tool from Amazon Web Services is called, predicts the cause of technical hiccups using input from third-party tools such as Datadog and Dynatrace. AWS said customers can sign up to use the tool in a preview starting Tuesday, before Amazon begins charging for the service.
The AI outage tool from AWS is intended to help companies more quickly figure out what caused an outage and implement fixes, Swami Sivasubramanian, vice president of agentic AI at AWS, told CNBC. It’s what site reliability engineers, or SREs, do at many companies that provide online services.
SREs try to prevent downtime and jump into action during live incidents. Startups such as Resolve and Traversal have started marketing AI assistants for these experts. Microsoft’s Azure cloud group introduced an SRE Agent in May.
Rather than waiting for on-call staff members to figure out what happened, the AWS DevOps Agent automatically assigns work to agents that look into different hypotheses, Sivasubramanian said.
“By the time the on-call ops team member dials in, they have an incident report with preliminary investigation of what could be the likely outcome, and then suggest what could be the remediation as well,” Sivasubramanian told CNBC ahead of AWS’ re:Invent conference in Las Vegas this week.
Commonwealth Bank of Australia has tested the AWS DevOps Agent. In under 15 minutes, the software found the root cause of an issue that would have taken a veteran engineer hours, AWS said in a statement.
The tool relies on Amazon’s in-house AI models and those from other providers, a spokesperson said.
AWS has been selling software in addition to raw infrastructure for many years. Amazon started renting out server space and storage to developers in the mid-2000s, and technology companies such as Google, Microsoft and Oracle have followed.
Since the launch of ChatGPT in 2022, these cloud infrastructure providers have been trying to demonstrate how generative AI models, which are often trained in large cloud computing data centers, can speed up work for software developers.
Over the summer, Amazon announced Kiro, a so-called vibe coding tool that produces and modifies source code based on user text prompts. In November, Google debuted similar software for individual software developers called Antigravity, and Microsoft sells subscriptions to GitHub Copilot.
Attendees pass an Amazon Web Services logo during AWS re:Invent 2024, a conference hosted by Amazon Web Services, at The Venetian hotel in Las Vegas on Dec. 3, 2024.
Noah Berger | Getty Images
Amazon has found a way to let cloud clients extensively customize generative AI models. The catch is that the system costs $100,000 per year.
The Nova Forge offering from Amazon Web Services gives organizations access to Amazon’s AI models in various stages of training so they can incorporate their own data earlier in the process.
Already, companies can fine-tune large language models after they’ve been trained. The results with Nova Forge will lean more heavily on the data that customers supply. Nova Forge customers will also have the option to refine open-weight models, but training data and computing infrastructure are not included.
Organizations that assemble their own models might end up spending hundreds of millions or billions of dollars, which means using Nova Forge is more affordable, Amazon said.
AWS released its own models under the Nova brand in 2024, but they aren’t the first choice for most software developers. A July survey from Menlo Ventures said that by the middle of this year, Amazon-backed Anthropic controlled 32% of the market for enterprise LLMs, followed by OpenAI with 25%, Google with 20% and Meta with 9% — Amazon Nova had a less than 5% share, a Menlo spokesperson said.
The Nova models are available through AWS’ Bedrock service for running models on Amazon cloud infrastructure, as are Anthropic’s Claude 4.5 models.
“We are a frontier lab that has focused on customers,” Rohit Prasad, Amazon head scientist for artificial general intelligence, told CNBC in an interview. “Our customers wanted it. We have invented on their behalf to make this happen.”
Nova Forge is also in use by internal Amazon customers, including teams that work on the company’s stores and the Alexa AI assistant, Prasad said.
Reddit needed an AI model for moderating content that would be sophisticated about the many subjects people discuss on the social network. Engineers found that a Nova model enhanced with Reddit data through Forge performed better than commercially available large-scale models, Prasad said. Booking.com, Nimbus Therapeutics, the Nomura Research Institute and Sony are also building models with Forge, Amazon said.
Organizations can request that Amazon engineers help them build their Forge models, but that assistance is not included in the new service’s $100,000 annual fee.
AWS is also introducing new models for developers at its re:Invent conference in Las Vegas this week.
Nova 2 Pro is a reasoning model that, in Amazon’s tests, performs at least as well as Anthropic’s Claude Sonnet 4.5, OpenAI’s GPT-5 and GPT-5.1, and Google’s Gemini 3.0 Pro Preview, the company said. Reasoning involves running a series of computations, which might take extra time, in response to requests in order to produce better answers. Nova 2 Pro will be available in early access to AWS customers with Forge subscriptions, Prasad said. That means Forge customers and Amazon engineers will be able to try Nova 2 Pro at the same time.
Nova 2 Omni is another reasoning model that can process incoming images, speech, text and videos, and it generates images and text. It’s the first reasoning model with that range of capability, Amazon said. Amazon hopes that, by delivering a multifaceted model, it can lower the cost and complexity of incorporating AI models into applications.
Tens of thousands of organizations are using Nova models each week, Prasad said. AWS has said it has millions of customers. Nova is the second-most popular family of models in Bedrock, Prasad said. The top family of models is from Anthropic.