A customer tries on the Apple Vision Pro headset during the product launch at an Apple Store in New York City on Feb. 2, 2024.

Angela Weiss | AFP | Getty Images

The Vision Pro, the new virtual reality headset from Apple, can transport you to Hawaii or the surface of the moon.

It displays high-resolution computer graphics a few millimeters from the user’s eyes, all while allowing the user to control a desktop-like interface using their eyes and subtle hand gestures. The Vision Pro provides a preview of what using a computer could be like in five years, early adopters say.

The Vision Pro starts at $3,499. After adding storage and accessories such as straps, the whole package can cost as much as $4,500.

That’s a lot more expensive than competing headsets, such as Meta’s Quest 3, which starts at $499. It’s pricier than Meta’s high-end headset, the Quest Pro, which starts at $999. It’s also more expensive, even after controlling for inflation, than the first iPad ($499) or the first iPhone ($499 with a two-year contract).

The Vision Pro includes lots of pricey state-of-the-art parts. One estimate from research firm Omdia puts the “bill of materials” for the headset at $1,542, and that doesn’t include the costs of research and development, packaging, marketing or Apple’s profit margin.

The most expensive part in the headset is the 1.25-inch Sony Semiconductor display that goes in front of each of the user’s eyes.

It’s a key component that helps the virtual experience feel more realistic than previous consumer headsets. The displays have a lot of pixels and lifelike colors, and are built with state-of-the-art manufacturing techniques.

Apple pays about $228 for the “Micro OLED” displays it uses, according to the Omdia estimate. Each Vision Pro needs two of them, one for each eye. Sony Semiconductor declined CNBC’s request to comment for this story.
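
Taken together, those estimates let a reader gauge how much of the headset’s parts cost goes to the displays alone. Below is a minimal sketch using only the Omdia figures quoted above; the percentages are derived arithmetic, not Omdia’s own numbers.

```python
# Rough arithmetic from the Omdia estimates quoted in this article.
# The share-of-BOM percentages are derived here, not Omdia figures.

display_unit_cost = 228      # USD, per Micro OLED panel (Omdia estimate)
displays_per_headset = 2     # one per eye
total_bom = 1542             # USD, Omdia bill-of-materials estimate
retail_price = 3499          # USD, Vision Pro starting price

display_cost = display_unit_cost * displays_per_headset   # $456
print(f"Displays: ${display_cost} "
      f"({display_cost / total_bom:.0%} of the estimated BOM)")
print(f"Estimated BOM is {total_bom / retail_price:.0%} of the $3,499 starting price")
```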

The Vision Pro displays are the latest example of Apple embracing a new kind of display technology at a larger scale and earlier than the rest of the electronics industry.

Apple’s use of LCD touchscreens for the first iPhone in 2007, and its later transition to organic LED (OLED) displays with the iPhone X in 2017, upended existing supply chains and, after Apple shipped millions of units, ultimately drove down the cost of those parts for the entire industry.

Apple has a massive effect on the display industry, said Jacky Qiu, co-founder of OTI Lumionics, which makes materials for manufacturing micro LED panels. He said display makers fight for Apple’s business, which can be make or break for these companies.

“Apple is now the biggest player in terms of OLEDs, in terms of displays. They are the ones that are basically taking all the high-margin displays, all the stuff that is the high-spec type of stuff that is allowing the panel makers today to become profitable,” Qiu said.

“You look at the display business, you either work for Apple and make the iPhone screens and you’re profitable, or you don’t, and you lose money. It’s as brutal as that,” Qiu said.

Micro OLED

The Vision Pro’s displays are a defining feature. They’re packed with pixels and are sharper than any competing headset.

It was one of the main features Meta CEO Mark Zuckerberg complimented when comparing the $499 Quest 3 headset to Apple’s.

“Apple’s screen does have a higher resolution and that’s really nice,” Zuckerberg said in a video posted on his Instagram page, while saying that Quest’s screens are brighter.

“What’s so revolutionary about the OLED displays that are in the Vision Pro, the difference between Micro OLED and the OLED that you find on a television in your living room is that the pixels are actually a lot denser, they’re smaller and they’re more compact,” said Wayne Rickard, CEO of Terecircuits, a company that makes materials and techniques for display manufacturing.

An Apple Vision Pro headset is displayed during the product release at an Apple Store in New York City on Feb. 2, 2024.

Angela Weiss | AFP | Getty Images

According to a teardown analysis from repair firm iFixit, each Vision Pro display has a resolution of 3,660 by 3,200 pixels. That’s more pixels per eye than the iPhone 15, which has a screen resolution of 2,556 by 1,179 pixels. Meta’s Quest 3 comes in at 2,064 by 2,208 pixels per eye.

The Vision Pro’s screens are also much smaller than the iPhone’s, which means the pixels sit closer together and are harder to manufacture. The Vision Pro displays pack 3,386 pixels per inch, versus about 460 pixels per inch on the iPhone 15’s display.

In total, Apple says the Vision Pro’s displays have more than 23 million total pixels.

They’re some of the densest displays ever built. According to iFixit, 54 Vision Pro pixels can fit in a single iPhone pixel, and each pixel is about 7.5 microns from the next pixel, a measurement called “pixel pitch,” according to Apple’s specifications.
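
Those density figures are consistent with one another. Here is a quick sketch using only the resolution, pixel pitch and iPhone density numbers cited above; the unit conversions are illustrative arithmetic rather than figures from the teardown.

```python
# How the Vision Pro pixel figures cited above relate to each other.
# Inputs come from iFixit and Apple's specs; the conversions are illustrative.

MICRONS_PER_INCH = 25_400

res_w, res_h = 3660, 3200        # per-eye resolution (iFixit teardown)
pixel_pitch_um = 7.5             # Vision Pro pixel pitch (Apple spec)
iphone_ppi = 460                 # approximate iPhone 15 pixel density

total_pixels = 2 * res_w * res_h
print(f"Total pixels (both eyes): {total_pixels:,}")         # ~23.4 million

vision_pro_ppi = MICRONS_PER_INCH / pixel_pitch_um
print(f"Vision Pro density: ~{vision_pro_ppi:,.0f} ppi")     # ~3,387 ppi

iphone_pitch_um = MICRONS_PER_INCH / iphone_ppi              # ~55 microns
pixels_per_iphone_pixel = (iphone_pitch_um / pixel_pitch_um) ** 2
print(f"Vision Pro pixels per iPhone pixel area: ~{pixels_per_iphone_pixel:.0f}")  # ~54
```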

The Apple Vision Pro home screen.

Todd Haselton | CNBC

“With Micro LEDs in particular, it can get down to about below 10 microns. For comparison, a red blood cell might be about 20 microns, so half the size of a red blood cell,” Rickard said.

Apple opted for high-resolution displays so the headset comes closer to simulating reality in passthrough mode, which uses outward-facing cameras to show video of the real world inside the headset. The resolution also lets users read text and numbers in virtual reality, and it removes the “screen door” effect of other headsets, where the gaps between individual pixels are visible.

VR headsets need pixel-dense displays because the user’s eyes are so close to the screen. TVs have significantly fewer pixels, but it doesn’t matter because viewers are feet away.

The production of this kind of display requires cutting-edge manufacturing. For example, most displays are built on a backplane made out of glass. The Vision Pro displays are so pixel-dense that they use a silicon backplane, much like a semiconductor.

‘An incredible amount of technology packed into the product’

The new Apple Vision Pro headset is displayed during the Apple Worldwide Developers Conference in Cupertino, California, on June 5, 2023.

Justin Sullivan | Getty Images

The second most expensive part in the Vision Pro is its main processor, which pairs Apple’s M2 chip, the same chip used in the MacBook Air, with the R1, a custom chip that handles video feeds and other sensors on the device.

Bill of materials estimates don’t take into account research and development costs, packaging or shipping. They also exclude capital expenditures that can add up-front costs to big parts orders. Still, they’re useful for people in the manufacturing world who want a sense of how expensive the parts in any given device are.

Display technologies embraced by Apple typically come down in price after Apple makes them mainstream and as multiple suppliers compete for business.

“South Korean suppliers like Samsung Display and LG Display have shown their interest in this technology. Chinese suppliers like Seeya and BOE are also small-scale mass-produced [OLED on silicon] products,” said Jay Shao, Omdia analyst for displays, in an email. He expects the costs for Vision Pro spec screens to come down in the coming years.

Apple declined to comment, but Apple CEO Tim Cook is not a fan of cost estimates and teardowns. “I’ve never seen one that’s even close to accurate,” he said on an earnings call in 2015.

Apple doesn’t typically discuss its suppliers, but in February, Cook was asked about the device’s price tag on an earnings call.

“If you look at it from a price point of view, there’s an incredible amount of technology packed into the product,” Cook said.

He mentioned some of the most expensive parts in the device and emphasized the R&D costs that Apple spent developing it.

“There’s 5,000 patents in the product, and it’s built on many innovations that Apple has spent multiple years on from silicon to displays and significant AI and machine learning. All the hand tracking, the room mapping, all of this stuff is driven by AI, and so we’re incredibly excited about it,” Cook continued.

Musk says he does not support a merger between Tesla and xAI but backs investment

Elon Musk and the xAI logo.

Vincent Feuray | AFP | Getty Images

Elon Musk on Monday said he does not support a merger between xAI and Tesla, as questions swirl over the future relationship of the electric automaker and artificial intelligence company.

X account @BullStreetBets_ posted an open question to Tesla investors on the social media site asking if they support a merger between Tesla and xAI. Musk responded with “No.”

The statement comes as the tech billionaire contemplates the future relationship between his multiple businesses.

Overnight, Musk suggested that Tesla will hold a shareholder vote at an unspecified time on whether the automaker should invest in xAI, the billionaire’s company that develops the controversial Grok AI chatbot.

Last year, Musk asked his followers in a poll on social media platform X whether Tesla should invest $5 billion into xAI. The majority voted “yes” at the time.

Musk has looked to bring his various businesses closer together. In March, Musk merged xAI and X together in a deal that valued the artificial intelligence company at $80 billion and the social media company at $33 billion.

Musk also said last week that xAI’s chatbot Grok will be available in Tesla vehicles. The chatbot has come under criticism recently, after praising Adolf Hitler and posting a barrage of antisemitic comments.

CNBC’s Samantha Subin contributed to this report.

Alibaba-backed Moonshot releases new Kimi AI model that beats ChatGPT, Claude in coding — and it costs less

An AI sign at the MWC Shanghai tech show on June 19, 2025.

Bloomberg | Bloomberg | Getty Images

BEIJING — The latest Chinese generative artificial intelligence model to take on OpenAI’s ChatGPT is offering coding capabilities — at a lower price.

Alibaba-backed startup Moonshot released its Kimi K2 model late Friday night: a low-cost, open-source large language model — the two factors that underpinned China-based DeepSeek’s industry disruption in January. Open-source technology provides source code access for free, an approach that few U.S. tech giants have taken, other than Meta and Google to some extent.

Coincidentally, OpenAI CEO Sam Altman announced early Saturday that the company’s first open-source model would be delayed yet again, this time indefinitely, due to safety concerns. OpenAI did not immediately respond to a CNBC request for comment on Kimi K2.

Rethinking the AI coding payoff

One of Kimi K2’s strengths is in writing computer code for applications, an area in which businesses see potential to reduce or replace staff with generative AI. OpenAI’s U.S. rival Anthropic focused on coding with its Claude Opus 4 model released in late May.

In its release announcement on social media platforms X and GitHub, Moonshot claimed Kimi K2 surpassed Claude Opus 4 on two benchmarks, and had better overall performance than OpenAI’s coding-focused GPT-4.1 model, based on several industry metrics.

“No doubt [Kimi K2 is] a globally competitive model, and it’s open sourced,” Wei Sun, principal analyst in artificial intelligence at Counterpoint, said in an email Monday.

Cheaper option

“On top of that, it has lower token costs, making it attractive for large-scale or budget-sensitive deployments,” she said.

The new K2 model is available via Kimi’s app and browser interface for free, unlike ChatGPT or Claude, which charge monthly subscriptions for their latest AI models.

Kimi is also charging only 15 cents per 1 million input tokens and $2.50 per 1 million output tokens, according to its website. Tokens are a way of measuring data for AI model processing.

In contrast, Claude Opus 4 charges 100 times more for input — $15 per million tokens — and 30 times more for output — $75 per million tokens. Meanwhile, for every one million tokens, GPT-4.1 charges $2 for input and $8 for output.
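
To make the gap concrete, here is a small sketch that prices a hypothetical workload of 1 million input tokens and 1 million output tokens at the per-million rates quoted above; the workload size is illustrative, not from the article.

```python
# Cost of a hypothetical 1M-input / 1M-output token workload at the
# per-million-token prices quoted above. The workload itself is illustrative.

rates = {                      # (input $/M tokens, output $/M tokens)
    "Kimi K2":       (0.15, 2.50),
    "Claude Opus 4": (15.00, 75.00),
    "GPT-4.1":       (2.00, 8.00),
}

input_m, output_m = 1, 1       # millions of tokens in the example workload

for model, (in_rate, out_rate) in rates.items():
    cost = input_m * in_rate + output_m * out_rate
    print(f"{model:>13}: ${cost:,.2f}")
# Kimi K2: $2.65, Claude Opus 4: $90.00, GPT-4.1: $10.00
```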

Moonshot AI said on GitHub that developers can use K2 however they wish, with the only requirement that they display “Kimi K2” on the user interface if the commercial product or service has more than 100 million monthly active users, or makes the equivalent of $20 million in monthly revenue.
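
That condition amounts to a simple threshold check. As a hypothetical illustration (the function name and example inputs are invented for this sketch, not part of Moonshot’s terms), it could be expressed like this:

```python
# Hypothetical helper illustrating Moonshot's stated attribution condition for K2:
# show "Kimi K2" in the UI if the product exceeds 100M monthly active users
# or makes roughly $20M in monthly revenue. Name and inputs are illustrative.

def requires_kimi_attribution(monthly_active_users: int, monthly_revenue_usd: float) -> bool:
    return monthly_active_users > 100_000_000 or monthly_revenue_usd >= 20_000_000

print(requires_kimi_attribution(5_000_000, 1_000_000))    # False: small deployment
print(requires_kimi_attribution(150_000_000, 0))          # True: exceeds the user threshold
```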

Hot AI market

Initial reviews of K2 on both English and Chinese social media have largely been positive, although there are some reports of hallucinations, a prevalent issue in generative AI, in which the models make up information.

Still, K2 is “the first model I feel comfortable using in production since Claude 3.5 Sonnet,” Pietro Schirano, founder of startup MagicPath that offers AI tools for design, said in a post on X.

Moonshot has open sourced some of its prior AI models. The company’s chatbot surged in popularity early last year as China’s alternative to ChatGPT, which isn’t officially available in the country. But similar chatbots from ByteDance and Tencent have since crowded the market, while tech giant Baidu has revamped its core search engine with AI tools.

Kimi’s latest AI release comes as investors eye Chinese alternatives to U.S. tech in the global AI competition.

Still, despite the excitement about DeepSeek, the privately held company has yet to announce a major upgrade to its R1 and V3 models. Meanwhile, Manus AI, a Chinese startup that emerged earlier this year as another DeepSeek-type upstart, has relocated its headquarters to Singapore.

Over in the U.S., OpenAI also has yet to reveal GPT-5.

Work on GPT-5 may be taking up engineering resources, preventing OpenAI from progressing on its open-source model, Counterpoint’s Sun said, adding that it’s challenging to release a powerful open-source model without undermining the competitive advantage of a proprietary model.

Grok 4 competitor

Kimi K2 is not the company’s only recent release. Moonshot launched a Kimi research model last month and claimed it matched Google’s Gemini Deep Research’s 26.9 score and beat OpenAI’s version on a benchmark called “Humanity’s Last Exam.”

The Kimi research model even got a mention last week during Elon Musk’s xAI release of Grok 4 — which scored 25.4 on its own on the “Humanity’s Last Exam” benchmark, but attained a 44.4 score when allowed to use a variety of AI tools and web search.

“Kimi-Researcher represents a paradigm shift in agentic AI,” said Winston Ma, adjunct professor at NYU School of Law. He was referring to AI’s capability of simultaneously making several decisions on its own to complete a complex task.

“Instead of merely generating fluent responses, it demonstrates autonomous reasoning at an expert level — the kind of complex cognitive work previously missing from LLMs,” Ma said. He is also author of “The Digital War: How China’s Tech Power Shapes the Future of AI, Blockchain and Cyberspace.”

— CNBC’s Victoria Yeo contributed to this report.

Nvidia CEO downplays U.S. fears that China’s military will use his firm’s chips

Jensen Huang, co-founder and chief executive officer of Nvidia Corp., attends the 9th edition of the VivaTech trade show in Paris on June 11, 2025.

Chesnot | Getty Images Entertainment | Getty Images

Nvidia CEO Jensen Huang has downplayed U.S. fears that his firm’s chips will aid the Chinese military, days ahead of another trip to the country as he attempts to walk a tightrope between Washington and Beijing. 

In an interview with CNN aired Sunday, Huang said “we don’t have to worry about” China’s military using U.S.-made technology because “they simply can’t rely on it.”

“It could be limited at any time; not to mention, there’s plenty of computing capacity in China already,” Huang said. “They don’t need Nvidia’s chips, certainly, or American tech stacks in order to build their military,” he added.

The comments were made in reference to years of bipartisan U.S. policy that placed restrictions on semiconductor companies, prohibiting them from selling their most advanced artificial intelligence chips to clients in China. 

Huang also repeated past criticisms of the policies, arguing that the tactic of export controls has been counterproductive to the ultimate goal of U.S. tech leadership. 

“We want the American tech stack to be the global standard … in order for us to do that, we have to be in search of all the AI developers in the world,” Huang said, adding that half of the world’s AI developers are in China. 

That means for America to be an AI leader, U.S. technology has to be available to all markets, including China, he added.

Washington’s latest restrictions on Nvidia’s sales to China were implemented in April and are expected to result in billions in losses for the company. In May, Huang said chip restrictions had already cut Nvidia’s China market share nearly in half.

Huang’s CNN interview came just days before he travels to China for his second trip to the country this year, and as Nvidia is reportedly working on another chip that is compliant with the latest export controls.

Last week, the Nvidia CEO met with U.S. President Donald Trump, and was warned by U.S. lawmakers not to meet with companies connected to China’s military or intelligence bodies, or entities named on America’s restricted export list.

According to Daniel Newman, CEO of tech advisory firm The Futurum Group, Huang’s CNN interview exemplifies how the Nvidia chief has been threading a needle between Washington and Beijing as he tries to maintain maximum market access.

“He needs to walk a proverbial tightrope to make sure that he doesn’t rattle the Trump administration,” Newman said, adding that he also wants to be in a position for China to invest in Nvidia technology if and when the policy provides a better climate to do so.

But that’s not to say that his downplaying of Washington’s concerns is valid, according to Newman. “I think it’s hard to completely accept the idea that China couldn’t use Nvidia’s most advanced technologies for military use.”

He added that he would expect Nvidia’s technology to be at the core of any country’s AI training, including for use in the development of advanced weaponry. 

A U.S. official told Reuters last month that China’s large language model startup DeepSeek — which says it used Nvidia chips to train its models — was supporting China’s military and intelligence operations. 

On Sunday, Huang acknowledged there were concerns about DeepSeek’s open-source R1 reasoning model being trained in China but said that there was no evidence that it presents dangers for that reason alone.

Huang complimented the R1 reasoning model, calling it “revolutionary,” and said its open-source nature has empowered startup companies, new industries, and countries to be able to engage in AI. 

“The fact of the matter is, [China and the U.S.] are competitors, but we are highly interdependent, and to the extent that we can compete and both aspire to win, it is fine to respect our competitors,” he concluded. 
