

Intel CEO Pat Gelsinger speaks while showing silicon wafers during an event called AI Everywhere in New York, Thursday, Dec. 14, 2023.

Seth Wenig | AP

Intel’s long-awaited turnaround looks farther away than ever after the company reported dismal first-quarter earnings. Investors pushed the stock down 10% on Friday to its lowest level of the year.

Although Intel’s revenue is no longer shrinking and the company remains the biggest maker of processors that power PCs and laptops, sales in the first quarter trailed estimates. Intel also gave a soft forecast for the second quarter, suggesting weak demand.

It was a tough showing for CEO Pat Gelsinger, who’s early in his fourth year at the helm.

But Intel’s problems are decades in the making.

Before Gelsinger returned in 2021, Intel, a company once synonymous with “Silicon Valley,” had lost its edge in semiconductor manufacturing to overseas rivals like Taiwan Semiconductor Manufacturing Company. Now, in a high-risk quest, it’s spending billions of dollars per quarter to regain ground.

“Job number one was to accelerate our efforts to close the technology gap that was created by over a decade of under-investment,” Gelsinger told investors on Thursday. He said the company is still on track to catch up by 2026.

Investors remain skeptical. Intel is the worst-performing tech stock in the S&P 500 this year, down 37%. Meanwhile, the two best-performing stocks in the index are chipmaker Nvidia and Super Micro Computer, which has been boosted by surging demand for Nvidia-based AI servers.

Intel, long the most valuable U.S. chipmaker, is now one-sixteenth the size of Nvidia by market cap. It’s also smaller than Qualcomm, Broadcom, Texas Instruments, and AMD. For decades, Intel was the largest semiconductor company in the world by sales, but it recently suffered seven straight quarters of revenue declines and was passed by Nvidia last year.

Gelsinger is betting the company on a risky business model change. Not only will Intel make its own branded processors, but it will also act as a factory for other chip companies that outsource their manufacturing, a group that includes Nvidia, Apple, and Qualcomm. Its success in winning those customers will depend on Intel regaining what it calls “process leadership.”

Other semiconductor companies would like an alternative to TSMC so they don’t have to rely on a single supplier. U.S. politicians, including President Biden, call Intel an American chip champion and say the company is a strategically important part of the U.S. processor supply chain.

“Intel is a big iconic semiconductor company which has been the leader for many years,” said Nicholas Brathwaite, managing partner at Celesta Capital, which invests in semiconductor companies. “And I think it’s a company that is worth trying to save, and they have to come back to competitiveness.”

But the company isn’t doing itself any favors.

“I think everyone has been hearing them say the next quarter will be better for two, three years now,” said Counterpoint analyst Akshara Bassi.

Intel has fumbled the ball for years. It missed the mobile chip boom that began with the unveiling of the iPhone in 2007. It’s been largely on the sidelines of the artificial intelligence craze while companies like Meta, Microsoft and Google order as many Nvidia chips as they can.

Here’s how Intel ended up where it is today.

Missed out on the iPhone

The late Apple CEO Steve Jobs unveiling the first iPhone in 2007.

David Paul Morris | Getty Images News | Getty Images

The iPhone could have had an Intel chip inside. When Apple developed the first iPhone, then-CEO Steve Jobs visited then-Intel CEO Paul Otellini, according to Walter Isaacson’s 2011 biography “Steve Jobs.”

They discussed whether Intel should power the iPhone, which had not been released yet, Jobs and Otellini told Isaacson. When the iPhone was first revealed, it was marketed as a phone that ran the Apple Mac operating system. It would’ve made sense to use Intel chips, which powered the best desktop computers of the time, including Apple’s Macs.

Jobs said that Apple passed on Intel’s chips because the company was “slow” and Apple didn’t want the same chips to be sold to its competitors. Otellini said that while the tie-up would have made sense, the two companies couldn’t agree on a price or who owned the intellectual property, according to Isaacson.

The deal never happened. Apple chose Samsung chips when the iPhone launched in 2007. Apple bought PA Semi in 2008 and introduced its first homegrown iPhone chip in 2010.

Within five years, Apple was shipping iPhones by the hundreds of millions. Overall smartphone shipments, including Android phones designed to compete with Apple, surpassed PC shipments in 2010.

Nearly every modern smartphone uses an Arm-based chip instead of Intel’s x86 technology, which powered the original IBM PC in 1981 and is still in use today.

Arm-based chips designed by Apple and Qualcomm consume less power than Intel’s processors, making them better suited to small, battery-powered devices like phones.

Arm-based chips improved quickly, thanks to enormous manufacturing volumes and the demands of an industry that expects new chips every year with faster performance and fresh features. Apple started placing huge orders with TSMC to build its iPhone chips, starting with the A8 in 2014. That gave TSMC the cash to upgrade its manufacturing equipment annually and eventually surpass Intel.

By the end of the decade, some benchmarks showed the fastest phone processors rivaling Intel’s PC chips on certain tasks while consuming far less power. Around 2017, Apple and Qualcomm started adding dedicated AI circuitry, called neural processing units, to their mobile chips, another advance over Intel’s PC processors. The first Intel-based laptop with an NPU shipped late last year.

Intel has since lost share in its core PC chip business to chips that grew out of the mobile revolution.

Apple stopped using Intel chips in its PCs in 2020. Macs now use Apple’s own Arm-based chips, derived from the ones in iPhones. Some of the first mainstream Windows laptops with Arm-based chips are coming out later this year, and low-cost laptops running Google ChromeOS are increasingly using Arm, too.

“Intel lost a big chunk of their market share because of Apple, which is about 10% of the market,” Gartner analyst Mikako Kitagawa said.

Intel made efforts to break into smartphones. It released an x86-based mobile chip called Atom that powered phones such as the 2014 Asus ZenFone, but the chips never sold well and Intel scrapped its smartphone chip efforts in 2016.

Intel’s mobile stumble set the stage for a lost decade.

All about transistors

US President Joe Biden holds a wafer of chips as he tours the Intel Ocotillo Campus in Chandler, Arizona, on March 20, 2024.

Brendan Smialowski | AFP | Getty Images

Processors get faster as they gain transistors, since each additional transistor lets a chip perform more calculations. The original Intel microprocessor from 1971, the 4004, had about 2,000 transistors. Now Intel’s chips have billions.

Semiconductor companies fit more transistors on chips by shrinking them. The size of the transistor represents the “process node.” Smaller numbers are better.

The original 4004 used a 10-micrometer process. Now, TSMC’s best chips use a 3-nanometer process, while Intel is currently at 7 nanometers. A nanometer is one-thousandth of a micrometer.

Engineers, especially at Intel, took pride in regularly delivering smaller transistors. Brathwaite, who worked at Intel in the 1980s, said Intel’s process engineers were the company’s “crown jewels.” The technology industry relied on “Moore’s Law,” coined by Intel co-founder Gordon Moore, which held that the number of transistors on a chip, and with it computing power per dollar, would double at predictable intervals, roughly every two years.

Moore’s Law meant that Intel’s software partners, like Microsoft, could count on the next generation of PCs or servers being more powerful than the current generation.
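To put those figures in perspective, here is a small, illustrative Python sketch (not anything Intel or Moore published) that restates the process nodes above in nanometers and projects the 4004’s roughly 2,000 transistors forward under an idealized doubling every two years. The node labels and the starting transistor count come from this article; the doubling math is a simplification of Moore’s observation.

```python
# Illustrative back-of-the-envelope math for the figures cited above.
# Assumes an idealized Moore's Law: transistor counts double every two years.

MICROMETER_IN_NM = 1_000  # 1 micrometer = 1,000 nanometers

# Process nodes mentioned in the article, expressed in nanometers.
nodes_nm = {
    "Intel 4004 (1971)": 10 * MICROMETER_IN_NM,  # 10-micrometer process
    "Intel today": 7,                            # 7-nanometer-class process
    "TSMC today": 3,                             # 3-nanometer-class process
}

for name, nm in nodes_nm.items():
    print(f"{name}: {nm:,} nm")


def transistors_after(start_count: int, start_year: int, end_year: int) -> int:
    """Project a transistor count forward, doubling every two years."""
    doublings = (end_year - start_year) / 2
    return round(start_count * 2 ** doublings)


# The 4004 had roughly 2,000 transistors in 1971 (figure from the article).
projection = transistors_after(start_count=2_000, start_year=1971, end_year=2023)
print(f"Idealized 2023 projection: {projection:,} transistors")  # ~134 billion
```

The idealized projection works out to a figure on the order of 100 billion transistors, broadly in line with the biggest chips shipping today, which is why chipmakers and their software partners long treated the roughly two-year cadence as a planning assumption.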

The expectation of continuous improvement at Intel was so strong that it even had a nickname: “tick-tock development.” Every two years, Intel would release a chip on a new manufacturing process (tick), and in the subsequent year it would refine the design and technology on that process (tock).

In 2015, under CEO Brian Krzanich, it became clear that Intel’s 10nm process was delayed and that the company would keep shipping its most important PC and server processors on its 14nm process for longer than the normal two years. By the time the third generation of 14nm chips shipped in 2017, the “tick-tock” cadence had picked up an extra tock. Intel officials today say the issue was underinvestment, particularly in the extreme ultraviolet (EUV) lithography machines made by ASML, which TSMC enthusiastically embraced.

The delays compounded. Intel then missed its deadlines for the next process, 7nm, eventually revealing the problem in a bullet point in the small print of a 2020 earnings release. The stock plunged, clearing the way for Gelsinger, a former Intel engineer, to take over.

While Intel was struggling to keep its legendary pace, AMD, Intel’s historic rival for server and PC chips, took advantage.

AMD is a “fabless” chip designer. It designs its chips in California and has TSMC or GlobalFoundries manufacture them. TSMC didn’t have the same issues with 10nm or 7nm, which meant that AMD’s chips were competitive with or better than Intel’s in the latter half of the decade, especially for certain tasks.

AMD, which barely had any server CPU market share a decade ago, started taking its slice. AMD supplied over 20% of server CPUs sold in 2022, and its shipments grew 62% that year, according to an estimate last year from Counterpoint Research. AMD surpassed Intel’s market cap the same year.

Missing out on the AI boom

Nvidia founder and CEO Jensen Huang displays products on stage during the annual Nvidia GTC Conference at the SAP Center in San Jose, California, on March 18, 2024.

Josh Edelson | Afp | Getty Images

Graphics processing units, or GPUs, were originally designed to render sophisticated computer games. But computer scientists knew they were also ideal for running the kind of parallel calculations that AI algorithms require.

The broader business community caught on after OpenAI released ChatGPT in 2022, helping Nvidia triple sales over the past year. Companies are spending money on pricey servers again.

AI-oriented servers sometimes pair as many as eight Nvidia GPUs with a single Intel CPU. In older servers, the Intel CPU was almost always the most expensive and important part. In a GPU-based server, it’s Nvidia’s chips.

Nvidia recently announced a version of its latest “Blackwell” GPU that cuts Intel out entirely: two Blackwell GPUs are paired with one of Nvidia’s own Arm-based processors.

Almost all Nvidia GPUs used for AI are made by TSMC in Taiwan, using leading-edge techniques to produce the most advanced chips.

Intel doesn’t have a GPU competitor to Nvidia’s AI accelerators, but it has an AI chip called Gaudi 3. Intel stepped up its focus on AI for servers in 2019 when it bought Habana Labs, whose technology became the basis for the Gaudi chips. Gaudi 3 is manufactured on a 5nm process, which Intel doesn’t have, so the company relies on an external foundry to make it.

Intel says it expects $500 million in Gaudi 3 sales this year, mostly in the second half. For comparison, AMD expects about $2 billion in annual AI chip revenue. Meanwhile, analysts polled by FactSet expect Nvidia’s data center business — its AI GPUs — to account for $57 billion in sales during the second half of the year.

Still, Intel sees an opportunity and has recently been talking up a different AI story — it could eventually be the American producer of AI chips, maybe even for Nvidia.

The U.S. government is subsidizing a massive Intel fab outside Columbus, Ohio, as part of $8.5 billion in grants and additional loans for U.S. chipmaking. On a call with reporters in March, Gelsinger said the plant will offer leading-edge manufacturing when it comes online in 2028 and will make AI chips, perhaps even those of Intel’s rivals.

Intel’s death march

US President Joe Biden (C) stands behind a table, next to Intel’s CEO Pat Gelsinger (L) as they look at wafers while touring the Intel Ocotillo Campus in Chandler, Arizona, on March 20, 2024.

Brendan Smialowski | AFP | Getty Images

Intel has been confronting its old failures since Gelsinger took the helm in 2021, and it is actively trying to catch up to TSMC through a plan Intel calls “five nodes in four years.”

It hasn’t been easy. Gelsinger referred to its goal to regain leadership as a “death march” in 2022.

Now, the march is starting to reach its destination, and Intel said on Thursday that it’s still on track to catch up by 2026. At that point, TSMC will be shipping 2nm chips. Intel said it will begin producing chips on its “18A” process, an 18-angstrom (roughly 1.8nm) node meant to compete with TSMC at that level, in 2025.

It hasn’t been cheap. Intel reported a $2.5 billion operating loss in its foundry division for the quarter on $4.4 billion in mostly internal sales. The sums reflect the vast investments Intel is making in facilities and tools to make more advanced chips.

“Setup costs are high and that’s why there’s so much cash burn,” said Bassi, the Counterpoint analyst. “Running a foundry is a capital-intensive business. That’s why most of the competitors are fabless; they are more than happy to outsource it to TSMC.”

Intel last month reported a $7 billion operating loss in its foundry in 2023.

“We have a lot of these investments to catch up flowing through the P&L,” Gelsinger told CNBC’s Jon Fortt on Thursday. “But basically, what we expect in ’24 is the trough.”

Not many companies have officially signed up to use Intel’s fabs. Microsoft has said it will use them to manufacture its server chips. Intel says it’s already booked $15 billion in contracts with external companies for the service.

If Intel regains the lead in making the smallest transistors, it will boost the performance of its own products and strengthen its foundry business. If that happens, Intel will be back, as Gelsinger is fond of saying.

On Thursday, Gelsinger said demand was high for this year’s forthcoming server chips built on Intel 3, the company’s 3nm-class process, and that Intel could win back customers who had defected to competitors.

“We’re rebuilding customer trust,” Gelsinger said on Thursday. “They’re looking at us now saying ‘Oh, Intel is back.'”

WATCH: Intel’s traditional business hasn’t grown fast enough to cover its manufacturing costs: Chris Caso



How Elon Musk’s plan to slash government agencies and regulation may benefit his empire


Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX. 

Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.

Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.

In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit. 

“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.

To learn how else Elon Musk and his companies may benefit from having the ear of the president-elect, watch the video.



Why X’s new terms of service are driving some users to leave Elon Musk’s platform


Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024.

Carlos Barria | Reuters

X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform. 

The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models while also making users liable for as much as $15,000 in damages if they use the platform too much. 

The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms. 

“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.

“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”

It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.

Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively. 

X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.

Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.

Artificial intelligence training

X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.

“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.

Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually. 

X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.

“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.

Though X’s new terms may be expansive, Meta’s policies aren’t that different. 

The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center. 

Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.

Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training. 

“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.




The Pentagon’s battle inside the U.S. for control of a new Cyber Force


A recent Chinese cyber-espionage attack inside the nation’s major telecom networks, one that may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance, was described this week by one U.S. senator as “far and away the most serious telecom hack in our history.”

The U.S. has yet to figure out the full scope of what China accomplished, and whether or not its spies are still inside U.S. communication networks.

“The barn door is still wide open, or mostly open,” Sen. Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told the New York Times on Thursday.

The revelations highlight the rising cyberthreats tied to geopolitics and nation-state actor rivals of the U.S., but inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.

Among the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies, a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch as part of the annual defense policy deliberations.

Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.

“America’s cyber force generation system is clearly broken,” the FDD wrote, citing comments made in 2023 by then-leader of U.S. Cyber Command, Army General Paul Nakasone, who took over the role in 2018 and described current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone had said.

Concern with Congress and a changing White House

The FDD analysis points to “deep concerns” that have existed within Congress for a decade, among members of both parties, about the military’s ability to staff up to successfully defend cyberspace. Talent shortages, inconsistent training and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation, and ultimately weaken U.S. cyber readiness.

As the Pentagon doubles down on its resistance to the establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.

Trump, known for his assertive national security measures, released a 2018 National Cyber Strategy that emphasized embedding cyber capabilities across all elements of national power and focused on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At that time, the Trump administration emphasized centralizing civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for Secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.

Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.

Cyber Command 2.0 and the U.S. military

John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.

Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.

In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but the FDD and others have raised concerns that personnel were shifted between teams to meet staffing goals, a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, saying in comments earlier this year: “How do we think about training differently? How do we think about personnel differently?” He added that a major issue has been the approach to military staffing within the command.

Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.

Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.

Nakasone said in his comments earlier this year that a lot has changed since 2013, when U.S. Cyber Command began building out its Cyber Mission Force to take on missions such as counterterrorism and to counter financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.

Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautioned against major structural changes during a period of heightened global threats.

“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.

He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.

Wales said it’s important to ensure that any structural changes do not undermine integration across military branches, and to recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.

