Apple’s iPhone 16 family of phones will hit shelves on Friday. Ahead of their launch, I’ve spent the past five days testing the high-end iPhone 16 Pro Max.
It’s a great phone with cool updates like a dedicated camera button, and it charges faster over MagSafe than earlier Pro models. The screens are also slightly larger than prior versions.
But this review is tricky, because one of the banner features Apple has been hyping — on stage and in its new ads — is Apple Intelligence. It’s Apple’s suite of AI features for the iPhone, and it’s not coming until later this year.
There are reasons to be excited. A few of the new AI features, like changes to Siri, photo editing, and the option to have AI rewrite text for you, will launch in beta in October. More additions, such as Apple’s image and emoji generator, more personal Siri responses and integration with ChatGPT, will come later.
I was able to test some of the beta features for this review. Others weren’t available. Those limitations make it difficult to provide a comprehensive assessment of the new device or to suggest whether the upgrade is worthwhile.
Apple shares slid earlier in the week after analysts suggested lighter demand for the iPhone 16 Pro models this year. TF Securities analyst Ming-Chi Kuo said the problem is that Apple Intelligence isn’t out at launch. Barclays also suggested demand may be soft because the Chinese-language version of Apple Intelligence won’t launch until 2025.
Here’s what you need to know about the new iPhone 16 Pro Max, as of now.
The changes to know about
The biggest change you’ll notice is the new camera button. I’m still getting used to it after a few days, but I’m already defaulting to just pulling the phone out of my pocket, tapping the button and taking a picture.
My wife rightly asked me why I don’t just hit the camera button on the lock screen like on earlier iPhones. I don’t have a good answer for that. It just feels more natural to push a camera button.
I enjoyed doing a half-press to get camera controls like the zoom during my son’s first soccer game, though I sometimes found it easier to just pinch to zoom. The new 48-megapixel wide-angle lens offers sharper images in zoomed-out shots that can capture more scenery.
Videographers will likely enjoy the 4K 120fps recording offered on the iPhone 16 Pro Max. Still, I try to keep my clips in lower quality because I’m sharing them over text messages with family and friends.
The iPhone 16 Pro Max has the best battery life of any iPhone yet. Apple’s new A18 Pro processor paired with a larger battery offers up to 33 hours of video playback, up from 29 hours on last year’s iPhone 15 Pro Max. I was usually able to make it to about dinnertime before needing to charge the 15, and I can make it to bedtime — or beyond — with the new phone depending on how much I’m using it.
I love that Apple increased the speed of its MagSafe charging. I used MagSafe when it came out but ultimately switched back to cable charging because it was quicker. Now, MagSafe gives up to a 50% charge in 30 minutes if you’re using a 30-watt charger (not included).
The screens are slightly larger on this year’s Pro models. The iPhone 16 Pro Max moved from 6.7 inches to 6.9 inches. I didn’t notice a difference and could only tell when I put the two phones next to each other. It’s still a fantastic screen with a high refresh rate, which means scrolling is smooth. It’s colorful and bright and I love the always-on display for seeing notifications without picking up my phone. It’s not new this year but still useful and limited to the Pro models.
Apple Intelligence
In the absence of Apple Intelligence at launch, I’m limited to testing a few beta features. They’re hit or miss, as to be expected in beta.
Apple Intelligence could help drive a new cycle of iPhone upgrades. Apple reported $39.3 billion in iPhone sales during the fiscal third quarter, about 46% of the company’s total revenue and down 1% from a year earlier. CEO Tim Cook said the segment grew on a constant currency basis.
I like email summaries provided by Apple Intelligence. They’re accurate and give you just a couple of lines that summarize what’s said or relayed in an email. This only works in Apple’s Mail app, though, so it won’t work if your company makes you use Outlook or if you prefer Gmail. Similarly, I found that Apple Intelligence accurately summarized long bits of text (including the introduction to this review) and returned an accurate snippet.
In notifications, it’s just OK. Summaries of news alerts were correct. Summaries of text messages sometimes were unnecessary. In one text from my wife, for example, Apple Intelligence suggested I threw a dinosaur at my daughter and made her cry before I apologized. In reality, my son was the culprit. The original text would have been sufficient.
In a notification from a daycare app I use, Apple Intelligence did a good job summarizing that my daughter “took a nap, ate Cheerios, and is playing happily.” That would be a perfect amount of information to receive while driving.
Another Apple Intelligence feature can help you create movie memories, which are little snippets of photos and videos set to music. In a TV ad, Apple shows a young woman using it to create memories of a dead goldfish with the help of Siri.
I couldn’t use Siri to create movies like that. Instead, I opened the Photos app, tapped Memories and wrote in a prompt asking for a photo memory of my son “learning to fish at Skytop set to a fishing tune.” It correctly showed pictures of a family trip to the Poconos but didn’t include any pictures of my son fishing there. The music was called “Fishing Tune” by Jiang Jiaqiang but didn’t sound like fishing music to me. Another test, asking for a photo memory of my son “playing soccer,” worked better but also included a picture of him as a baby with a football in his hands.
There’s also the whole new Siri interface that glows along the edges of the screen. I like the look compared to the globe, and it’s easier to type to Siri by tapping the screen indicator at the bottom of the display. Siri doesn’t feel drastically changed to me right now, although I liked that I could ask iPhone-specific questions like “How do I use my iPhone to scan a document?” and “How do I take a screen recording?” Siri presents the answer in a simple step-by-step guide at the top of the screen.
You can speak to Siri with interruptions now, too. So, if you get stumped while you’re thinking and say “umm” or “hold on a second,” you can continue to ask questions in the same line of thought, like “How tall is the Eiffel Tower?” and then follow with, “And when was it built?” But it doesn’t always work. I tried “How far is Boston?” for example, followed by, “And what’s the weather there?” Siri gave me the weather for my current location.
Apple Intelligence can be useful and I’m excited to see where it goes.
An attendee holds two iPhone 16s as Apple holds an event at the Steve Jobs Theater on its campus in Cupertino, California, on Sept. 9, 2024.
Manuel Orbegozo | Reuters
I focused this review on the iPhone 16 Pro Max. The iPhone 16 Pro is slightly smaller and has a little less battery life but is otherwise identical. My colleague tested the regular iPhone 16.
There are a few differences between the two. The iPhone 16 comes in more colors and is built out of aluminum instead of titanium like the higher-end Pro models. It also has the new camera button but lacks the higher refresh rate and the always-on features of the Pro model displays.
The iPhone 16 will support all of the Apple Intelligence features I’ve mentioned above, plus the ones that are still coming. Apple also upgraded the processor for faster performance and added a new macro camera mode for up-close pictures of objects, as well as support for capturing spatial images for the Apple Vision Pro headset. It offers up to 22 hours of video playback versus the 20 hours in last year’s iPhone 15.
Should you buy it?
The iPhone 16 Pro Max is a solid upgrade, but you’ll probably find the biggest changes if you’re coming from an iPhone 14 Pro Max or earlier. The biggest improvements over last year’s phone are the added camera button, a faster chip, new cameras and a slightly larger display.
When it comes to Apple Intelligence, we’ll all have to wait for features like using Siri to ask about prior calendar events, questions that require personal context, using Siri to control your apps, or Apple’s integration with ChatGPT. So if you’re buying now, it’s for everything but the AI.
Sanjay Beri, chief executive officer and founder of Netskope Inc., listens during a Bloomberg West television interview in San Francisco, California.
David Paul Morris | Bloomberg | Getty Images
Cloud security platform Netskope will go public on the Nasdaq under the ticker symbol “NTSK,” the company said in an initial public offering filing Friday.
The Santa Clara, California-based company said annual recurring revenue grew 33% to $707 million, while revenues jumped 31% to about $328 million in the first half of the year.
But Netskope isn’t profitable yet. The company recorded a $170 million net loss during the first half of the year. That narrowed from a $207 million loss a year ago.
Netskope joins a growing number of technology companies fueling the surge in IPO activity after high inflation and interest rates effectively froze the market.
So far this year, design software firm Figma more than tripled in its New York Stock Exchange debut, while crypto firm Circle soared 168% in its first trading day. CoreWeave has also popped since its IPO, while trading app eToro surged 29% in its May debut.
Netskope’s offering also coincides with a busy period for cybersecurity deals.
Founded in 2012, Netskope made a name for itself in its early years in the cloud access security broker space. The company lists Palo Alto Networks, Cisco, Zscaler, Broadcom and Fortinet as its major competitors.
Netskope’s biggest backers include Accel, Lightspeed Ventures and Iconiq, which recently benefited from Figma’s stellar debut.
Morgan Stanley and JPMorgan are leading the offering. Netskope listed 13 other Wall Street banks as underwriters.
Meta CEO Mark Zuckerberg makes a keynote speech at the Meta Connect annual event at the company’s headquarters in Menlo Park, Calif., on Sept. 25, 2024.
Manuel Orbegozo | Reuters
Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display, CNBC has learned.
That’s one of the two new devices Meta is planning to unveil at the event, according to people familiar with the matter. The company will also launch its first wristband that will allow users to control the glasses with hand gestures, the people said.
Connect is a two-day conference for developers focused on virtual reality, AR and the metaverse. It was originally called Oculus Connect and obtained its current moniker after Facebook changed its parent company name to Meta in 2021.
The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device, said the people, who asked not to be named because the details are confidential.
The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica, the people said. CNBC reported in October that Meta was working with Luxottica on consumer glasses with a display.
Meta declined to comment. Luxottica, which is based in France and Italy, didn’t respond to a request for comment.
Meta began selling smart glasses with Luxottica in 2021 when the two companies released the first-generation Ray-Ban Stories, which allowed users to take photos or videos using simple voice commands. The partnership has since expanded, and last year included the addition of advanced AI features that made the second generation of the product an unexpected hit with early adopters.
Luxottica owns a number of glasses brands, including Ray-Ban, and licenses many others like Prada. It’s unclear what brand Luxottica will use for the glasses with AR, but a Meta job listing posted this week said the company is looking for a technical program manager for its “Wearables organization,” which “is responsible for the Ray-Ban AR glasses and other wearable hardware.”
In June, CNBC reported that Meta and Luxottica plan to release Prada-branded smart glasses. Prada glasses are known for having thick frames and arms, which could make them a suitable option for the Hypernova device, one of the people said.
Last year, Meta CEO Mark Zuckerberg used Connect to showcase the company’s experimental Orion AR glasses.
Orion features AR capabilities on both lenses, blending 3D digital visuals into the physical world, but the device served only as a prototype to show the public what could be possible with AR glasses. Still, Orion built some positive momentum for Meta, which since late 2020 has endured nearly $70 billion in losses from its Reality Labs unit, the division in charge of building hardware devices.
With Hypernova, Meta will finally be offering glasses with a display to consumers, but the company is setting low expectations for sales, some of the sources said. That’s because the device requires more components than its voice-only predecessors, and will be slightly heavier and thicker, the people said.
Meta and Ray-Ban have sold 2 million pairs of their second-generation glasses since 2023, Luxottica CEO Francesco Milleri said in February. In July, Luxottica said that revenue from sales of the smart glasses had more than tripled year over year.
As part of an extension agreement between Meta and Luxottica announced in September, Meta obtained a stake of about 3% in the glasses company, according to Bloomberg. Meta also gets exclusive rights to Luxottica’s brands for its smart glasses technology for a number of years, a person familiar with the matter told CNBC in June.
Although Hypernova will feature a display, those visual features are expected to be limited, people familiar with the matter said. They said the color display will offer about a 20 degree field of view — meaning it will appear in a small window in a fixed position — and will be used primarily to relay simple bits of information, such as incoming text messages.
Andrew Bosworth, Meta’s technology chief, said earlier this month that there are advantages to having just one display rather than two, including a lower price.
“Monocular displays have a lot going for them,” Bosworth said in an Instagram video. “They’re affordable, they’re lighter, and you don’t have disparity correction, so they’re structurally quite a bit easier.”
‘Interact with an AI assistant’
Other details of Meta’s forthcoming glasses were disclosed in a July letter from U.S. Customs and Border Protection to a lawyer representing Meta. While the letter redacted the name of the company and the product, a person with knowledge of the matter confirmed that it was in reference to Meta’s Hypernova glasses.
“This model will enable the user to take and share photos and videos, make phone calls and video calls, send and receive messages, listen to audio playback and interact with an AI assistant in different forms and methods, including voice, display, and manual interactions,” according to the letter, dated July 23.
The letter from CBP was part of routine communication between companies and the U.S. government when determining the country of origin for a consumer product. It refers to the product as “New Smart Glasses,” and says the device will feature “a lens display function that allows the user to interface with visual content arising from the Smart Features, and components providing image data retrieval, processing, and rendering capabilities.”
CBP didn’t provide a comment for this story.
The Hypernova glasses will also come paired with a wristband that will use technology built by Meta’s CTRL Labs, said people familiar with the matter. CTRL Labs, which Meta acquired in 2019, specializes in building neural technology that could allow users to control computing devices using hand and arm gestures.
The wristband is expected to be a key input component for the company’s future release of full AR glasses, so getting data now with Hypernova could improve future versions of the wristband, the people said. Instead of using camera sensors to track body movements, as with Apple’s Vision Pro headset, Meta’s wristband uses so-called sEMG sensor technology, which reads and interprets the electrical signals from hand movements.
One of the challenges Meta has faced with the wristband involves how people choose to wear it, a person familiar with the product’s development said. If the device is too loose, it won’t be able to read the user’s electrical signals as intended, which could impact its performance, the person said. Also, the wristband has run into issues in testing related to which arm it’s worn on, how it works on men versus women and how it functions on people who wear long sleeves.
The CTRL Labs team published a paper in Nature in July about its wristband, and Meta wrote about it in a blog post. In the paper, the Meta team detailed its use of machine learning technology to make the wristband work with as many people as possible. The additional data collected by the upcoming device should improve those capabilities for future Meta smart glasses.
“We successfully prototyped an sEMG wristband with Orion, our first pair of true augmented reality (AR) glasses, but that was just the beginning,” Meta wrote in the post. “Our teams have developed advanced machine learning models that are able to transform neural signals controlling muscles at the wrist into commands that drive people’s interactions with the glasses, eliminating the need for traditional—and more cumbersome—forms of input.”
Bloomberg reported the wristband component in January.
Meta has recently started reaching out to developers to begin testing both Hypernova and the accompanying wristband, people familiar with the matter said. The company wants to court third-party developers, particularly those who specialize in generative AI, to build experimental apps that Meta can showcase to drum up excitement for the smart glasses, the people said.
In addition to Hypernova and the wristband, Meta will also announce a third generation of its voice-only smart glasses with Luxottica at Connect, one person said.
That device was also referenced by CBP in its July letter, referring to it as “The Next Generation Smart Glasses.” The glasses will include “components that provide capacitive touch functionality, allowing users to interact with the Smart Glasses through touch gestures,” the letter said.
Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California on May 20, 2025.
Camille Cohen | Afp | Getty Images
Alphabet shares rose on a Friday report that Apple is in early discussions to use Google’s Gemini AI models for an updated version of the iPhone-maker’s Siri assistant.
The company’s shares rose more than 3% on the Bloomberg report, which said Apple recently asked Google about the possibility of the search giant building a custom AI model to power a new Siri that could launch next year. Google’s flagship Gemini AI models have consistently ranked atop key benchmarks for artificial intelligence advancements, while Apple has struggled to define its own AI strategy.
The reported talks come as Google faces potential risk to its lucrative search deals with Apple. This month, a U.S. judge is expected to rule on the penalties for Google’s alleged search monopoly, in a case where the Department of Justice has recommended eliminating exclusionary agreements with third parties. For Google, that refers to its search position on Apple’s iPhone and Samsung devices — deals that cost the company billions of dollars a year in payouts.
The Android maker has said its Gemini models will become the default assistant on Android phones. Google this year has shown Gemini performing tasks that go beyond Siri’s capabilities, such as summarizing videos.
Craig Federighi, who oversees Apple’s operating systems, said at last year’s developer conference that the iPhone maker would like to add other AI models for specific purposes into its Apple Intelligence framework. Federighi specifically mentioned Google, whose Gemini can now hold conversations with users and handle input that comes from photos, videos, voice or text. Apple is also exploring partnerships with Anthropic and OpenAI as it tries to revamp its AI roadmap, according to a June Bloomberg report.
Documents revealed during Google’s remedy trial showed executives from Apple were involved in the negotiations over using Google’s Gemini for a potential search option.