The interior of a Tesla Model S is shown in autopilot mode in San Francisco, California, U.S., April 7, 2016.
Alexandria Sage | Reuters
As part of its investigation into Tesla Autopilot safety issues, the National Highway Traffic Safety Administration on Monday requested data from 12 other big automakers about their driver assistance systems.
The agency plans a comparative analysis between the systems Tesla and its competitors offer, as well as the practices they each used to develop, test, and track the safety of their driver assistance packages. If NHTSA determines any vehicle (or component or system) has a flawed design or safety defect, the agency has the power to mandate recalls.
Some of these brands are primary Tesla competitors with popular models in the growing battery electric segment of the automotive market, especially Kia and Volkswagen in Europe.
Tesla CEO Elon Musk has consistently touted Autopilot as technology that makes his company’s electric cars far less likely to be involved in an accident than others.
In April, he wrote on Twitter: “Tesla with Autopilot engaged now approaching 10 times lower chance of accident than average vehicle.”
Now, the feds will compare Tesla’s entire approach and Autopilot design to other automakers’ practices and driver assistance systems.
Results of this investigation could lead not only to a software recall for Tesla Autopilot, but also to a broader regulatory crack-down on automakers and the way they must develop and track the use of automated driving features like traffic-aware cruise control or collision avoidance.
As CNBC previously reported, NHTSA originally opened its probe of Tesla Autopilot after a series of crashes between Tesla vehicles and first-responder vehicles left 17 people injured and one person dead. The agency recently added to the list another collision, in which a Tesla veered off the road in Orlando and narrowly missed a police officer who was assisting another motorist on the side of the road.
Bitcoin was far and away the best-performing asset class in 2024 as new exchange-traded funds ushered in more widespread adoption and hopes for deregulation under a new presidential administration lifted digital assets to record levels.
But owning cryptocurrency also came with its usual unpredictability and dizzying swings, as this month's trading clearly illustrates. Bitcoin has more than doubled in price since starting the year in the $40,000 range and last traded near $95,500. Ether has gained nearly 50% year to date and last traded around the $3,400 level.
Chart: Bitcoin and ether since the start of 2024
The most prosperous stretch of the year came in the weeks following the U.S. presidential election. By mid-December, bitcoin had rocketed above $108,000 for the first time, fueled by optimism that President-elect Donald Trump's victory over Vice President Kamala Harris would open the door for greater regulatory clarity and send new money rushing into the sector.
Since then, however, prices have eased. Bitcoin is negative for the month, hurt by the expectation that the Federal Reserve’s rate cuts will roll out at a slower-than-anticipated pace. The market has also faced a stretch of apparent profit-taking and choppiness into the end of the year.
The year began with a strong boost of confidence from the introduction in January of new ETFs that hold the cryptocurrency. The funds, which are pitched by asset managers as a simpler way for investors to access bitcoin, have pulled in tens of billions of dollars of cash this year. The iShares Bitcoin Trust ETF (IBIT) now has more than $50 billion in assets.
Chart: MicroStrategy shares this year
Ether ETFs joined the excitement in July. The demand for those funds has not been as strong as for their bitcoin counterparts, but the category has still attracted more than $2 billion in net inflows in less than six months, according to FactSet.
Strong tailwinds for cryptocurrencies also lifted connected stocks to record levels. Bitcoin proxy MicroStrategy has surged 388% since the start of the year, while Coinbase and Robinhood have rallied about 47% and 200%, respectively. MicroStrategy shares have surged since mid-December, when the company was added to the Nasdaq 100 index.
Some mining stocks, however, haven't performed as well, with Mara Holdings and Riot Platforms on track for double-digit year-to-date losses. The drop in mining stocks may be a direct result of this year's bitcoin halving, which cut the block reward in half. Block rewards, along with transaction fees, are one of the most significant ways miners make money.
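For context, the halving is a protocol rule rather than a business decision: bitcoin's per-block subsidy is cut in half every 210,000 blocks, and the April 2024 halving reduced it from 6.25 to 3.125 bitcoin. A minimal sketch of that schedule in Python, for illustration only:

# Illustrative sketch of bitcoin's block-subsidy halving schedule.
# The subsidy starts at 50 BTC and halves every 210,000 blocks; the
# April 2024 halving (block 840,000) cut it from 6.25 to 3.125 BTC.
HALVING_INTERVAL = 210_000  # blocks between halvings
INITIAL_SUBSIDY = 50.0      # BTC per block when the network launched in 2009

def block_subsidy(height: int) -> float:
    """Return the per-block subsidy, in BTC, at a given block height."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY / (2 ** halvings)

print(block_subsidy(839_999))  # 6.25  -> just before the April 2024 halving
print(block_subsidy(840_000))  # 3.125 -> just after

With half as many new coins issued per block, miners lean more heavily on transaction fees to cover their costs, which helps explain the pressure on mining stocks noted above.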
— CNBC’s Jesse Pound contributed reporting.
Hock Tan, CEO of Broadcom (L), and Pat Gelsinger, former CEO of Intel.
Reuters | CNBC
It was a big year for silicon in Silicon Valley — but a brutal one for the company most responsible for the area’s moniker.
Intel, the 56-year-old chipmaker co-founded by industry pioneers Gordon Moore and Robert Noyce and legendary investor Arthur Rock, had its worst year since going public in 1971, losing 61% of its value.
The opposite story unfolded at Broadcom, the chip conglomerate run by CEO Hock Tan and headquartered in Palo Alto, California, about 15 miles from Intel’s Santa Clara campus.
Broadcom’s stock price soared 111% in 2024 as of Monday’s close, its best annual performance ever. The current company is the product of Avago’s 2015 acquisition of Broadcom; Avago, which went public in 2009, adopted the Broadcom name after the deal.
The driving force behind the diverging narratives was artificial intelligence. Broadcom rode the AI train, while Intel largely missed it. The changing fortunes of the two chipmakers underscore the fleeting nature of leadership in the tech industry and how a few key decisions can result in hundreds of billions, or even trillions, of dollars in market cap shifts.
Broadcom develops custom chips for Google and other huge cloud companies. It also makes essential networking gear that large server clusters need to tie thousands of AI chips together. Within AI, Broadcom has largely been overshadowed by Nvidia, whose graphics processing units, or GPUs, power most of the large language models being developed at OpenAI, Microsoft, Google and Amazon and also enable the heftiest AI workloads.
Despite having a lower profile, Broadcom’s accelerator chips, which the company calls XPUs, have become a key piece of the AI ecosystem.
“Why it’s really shooting up is because they’re talking about AI, AI, AI, AI,” Eric Ross, chief investment strategist at Cascend, told CNBC’s “Squawk Box” earlier this month.
Intel, which for decades was the dominant U.S. chipmaker, has been mostly shut out of AI. Its server chips lag far behind Nvidia’s, and the company has also lost market share to longtime rival Advanced Micro Devices while spending heavily on new factories.
Intel’s board ousted Pat Gelsinger from the CEO role on Dec. 1, after a tumultuous four-year tenure.
“I think someone more innovative might have seen the AI wave coming,” Paul Argenti, professor of management at Dartmouth’s Tuck School of Business, said in an interview on “Squawk Box” after the announcement.
An Intel spokesperson declined to comment.
Broadcom is now worth about $1.1 trillion and is the eighth U.S. tech company to cross the trillion-dollar mark. It’s the second most valuable chip company, behind Nvidia, which has ridden the AI boom to a $3.4 trillion valuation, trailing only Apple among all public companies. Nvidia’s stock price soared 178% this year, but actually did better in 2023, when it gained 239%.
Until four years ago, Intel was the world’s most valuable chipmaker, nearing a $300 billion market cap in early 2020. The company is now worth about $85 billion, just got booted off the Dow Jones Industrial Average — replaced by Nvidia — and has been in talks to sell off core parts of its business. Intel now ranks 15th in market cap among semiconductor companies globally.
‘Not meant for everybody’
Following the Avago-Broadcom merger in 2015, the combined company’s biggest business was chips for TV set-top boxes and broadband routers. Broadcom still makes Wi-Fi chips used in laptops as well as the iPhone and other smartphones.
After a failed bid to buy mobile chip giant Qualcomm in 2018, Broadcom turned its attention to software companies. The capstone of its spending spree came in 2022 with the announced acquisition of server virtualization software vendor VMware for $61 billion. Software accounted for 41% of Broadcom’s $14 billion in revenue in the most recent quarter, thanks in part to VMware.
What’s exciting Wall Street is Broadcom’s role working with cloud providers to build custom chips for AI. The company’s XPUs are generally simpler and less expensive to operate than Nvidia’s GPUs, and they’re designed to run specific AI programs efficiently.
Cloud vendors and other large internet companies are spending billions of dollars a year on Nvidia’s GPUs so they can build their own models and run AI workloads for customers. Broadcom’s success with custom chips is setting up an AI spending showdown with Nvidia, as hyperscale cloud companies look to differentiate their products and services from their rivals.
Broadcom’s chips aren’t for everyone, as only a handful of companies can afford to design and build their own custom processors.
“You have to be a Google, you have to be a Meta, you have to be a Microsoft or an Oracle to be able to use those chips,” Piper Sandler analyst Harsh Kumar told CNBC’s “Squawk on the Street” on Dec. 13, a day after Broadcom’s earnings. “These chips are not meant for everybody.”
While 2024 has been a breakout year for Broadcom — AI revenue increased 220% — the month of December has put it in record territory. The stock is up 45% for the month as of Monday’s close, 16 percentage points better than its prior best month.
On the company’s earnings call on Dec. 12, Tan told investors that Broadcom had doubled shipments of its XPUs to its three hyperscale providers. The most well known of the bunch is Google, which counts on the technology for its Tensor Processing Units, or TPUs, used to train Apple’s AI software released this year. The other two customers, according to analysts, are TikTok parent ByteDance and Meta.
Tan said that within about two years, companies could spend between $60 billion and $90 billion on XPUs.
“In 2027, we believe each of them plans to deploy 1 million XPU clusters across a single fabric,” Tan said of the three hyperscale customers.
In addition to AI chips, AI server clusters need powerful networking parts to train the most advanced models. Networking chips for AI accounted for 76% of Broadcom’s $4.5 billion of networking sales in the fourth quarter.
Broadcom said that, in total, about 40% of its $30.1 billion in 2024 semiconductor sales were related to AI, and that AI revenue would increase 65% in the first quarter to $3.8 billion.
“The degree of success amongst the hyperscalers in their initiatives here is clearly an area up for debate,” Cantor analyst C.J. Muse, who recommends buying Broadcom shares, wrote in a report on Dec. 18. “But any way you slice it, the focus here will continue to be a meaningful boon for those levered to custom silicon.”
Intel’s very bad year
Prior to 2024, Intel’s worst year on the market was 1974, when the stock sank 57%.
The seeds for the company’s latest stumbles were planted years ago, as Intel missed out on mobile chips to Qualcomm, ARM and Apple.
Rival AMD started taking market share in the critical PC and server CPU markets thanks to its productive manufacturing relationship with Taiwan Semiconductor Manufacturing Company. Intel’s manufacturing process has been a notch behind for years, leading to slower and less power-efficient central processing units, or CPUs.
But Intel’s most costly whiff is in AI — and it’s a big reason Gelsinger was removed.
Nvidia’s GPUs, originally created for video games, have become the critical hardware in the development of power-hungry AI models. Intel’s CPU, formerly the most important and expensive part in a server, has become an afterthought in an AI server. The GPUs Nvidia will ship in 2025 don’t even need an Intel CPU; many of them are paired with an Nvidia-designed Arm-based CPU.
As Nvidia has reported revenue growth of at least 94% for the past six quarters, Intel has been forced into downsizing mode. Sales have declined in nine of the past 11 periods. Intel announced in August that it was cutting 15,000 jobs, or about 15% of its workforce.
“We are working to create a leaner, simpler, more agile Intel,” board Chair Frank Yeary said in a Dec. 2 press release announcing Gelsinger’s departure.
A big problem for Intel is that it lacks a comprehensive AI strategy. It has touted the AI capabilities of its laptop chips to investors and released an Nvidia competitor called Gaudi 3. But neither the company’s AI PC initiative nor its Gaudi chips have gained much traction in the market, and Gaudi 3 sales missed the company’s own $500 million target for this year.
Late next year, Intel will release a new AI chip codenamed Falcon Shores. It won’t be built on the Gaudi 3 architecture but will instead be a GPU.
“Is it going to be wonderful? No, but it is a good first step in getting the platform done,” Intel interim co-CEO Michelle Holthaus said at a financial conference held by Barclays on Dec. 12.
Holthaus and fellow interim co-CEO David Zinsner have vowed to focus on Intel’s products, leaving the fate of Intel’s costly foundry division unclear.
Before he left, Gelsinger championed a strategy that involved Intel both regaining its footing in the semiconductor market and manufacturing chips for other companies to compete with TSMC. In June, at a conference in Taipei, Gelsinger told CNBC that once its factories were up and running, Intel wanted to build “everybody’s AI chips” and give companies such as Nvidia and Broadcom an alternative to TSMC.
Intel said in September that it plans to turn its foundry business into an independent unit with its own board and the potential to raise outside capital. But for now, Intel’s primary client is Intel. The company said it didn’t expect meaningful sales from external customers until 2027.
At the Barclays event this month, Zinsner said the separate board for the foundry business is “getting stood up today.” More broadly, he indicated that the company is looking to remove complexity and associated costs wherever possible.
“We are going to constantly be scrutinizing where we’re spending money, making sure that we’re getting the appropriate return,” Zinsner said.
The World Artificial Intelligence Conference in Shanghai in July 2023.
Aly Song | Reuters
Alibaba is cutting prices on its large language models by up to 85%, the Chinese tech giant announced Tuesday.
The Hangzhou-based e-commerce firm’s cloud computing division, Alibaba Cloud, said in a WeChat post that it’s offering the price cuts on its visual language model, Qwen-VL, which is designed to perceive and understand both texts and images.
Shares of Alibaba didn’t move much on the announcement, closing 0.5% higher on the final trading day of the year in Hong Kong.
Nevertheless, the price cuts demonstrate how the race among China’s technology giants to win more business for their nascent artificial intelligence products is intensifying.
Major Chinese tech firms including Alibaba, Tencent, Baidu, JD.com, Huawei and TikTok parent company ByteDance have all launched their own large language models over the past 18 months, looking to capitalize on the hype around the technology.
It’s not the first time Alibaba has announced price cuts to incentivize businesses to use its AI products. In February, the company announced price reductions of as much as 55% on a wide range of core cloud products. More recently, in May, the company reduced prices on its Qwen AI model by as much as 97% in a bid to boost demand.
Large language models, or LLMs for short, are AI models that are trained on vast quantities of data to generate humanlike responses to user queries and prompts. They are the bedrock for today’s generative AI systems, like Microsoft-backed startup OpenAI’s popular AI chatbot, ChatGPT.
In Alibaba’s case, the company is focusing its LLM efforts on the enterprise segment rather than launching a consumer AI chatbot like OpenAI’s ChatGPT. In May, the company said its Qwen models have been deployed by over 90,000 enterprise users.
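To make the enterprise framing concrete, here is a minimal sketch of how a business might call a hosted Qwen visual-language model through an OpenAI-compatible chat API. The endpoint URL, model identifier and image URL below are placeholders for illustration, not Alibaba Cloud's documented values.

# Hypothetical sketch: querying a hosted Qwen visual-language model via an
# OpenAI-compatible endpoint. The base_url, model name and image URL are
# placeholders, not confirmed Alibaba Cloud values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                        # key issued by the cloud provider
    base_url="https://example-cloud-endpoint/v1",  # placeholder endpoint
)

response = client.chat.completions.create(
    model="qwen-vl",  # placeholder model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the product shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/product.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)

Because hosted models of this kind are typically billed per token processed, price cuts on the scale Alibaba announced translate directly into lower per-call costs for enterprise workloads.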