
A view of Apple’s new iPhone 16 at an Apple Store on Regent Street in London, United Kingdom on September 20, 2024.

Rasid Necati Aslim | Anadolu | Getty Images

As Apple prepares Apple Intelligence to jump into Silicon Valley’s AI race, it’s relying on one of its strongest advantages: its army of 34 million app developers.

iPhone users will get their first taste of Apple Intelligence, the company’s artificial intelligence system, later this month. The company is counting on Apple Intelligence to be the strongest selling point for the iPhone 16, its latest generation of smartphones.

Apple’s AI isn’t as advanced as the state of the art from rivals such as OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama. Apple isn’t using the biggest models, nor can it pull off some of the more show-stopping tricks of the bleeding-edge voice models. OpenAI’s latest can sing, for example.

Where Apple is hoping to distinguish its AI is that Siri may actually be able to do things on your phone: send emails, decipher calendars, and take and edit photos. That’s something other companies’ AI chatbots cannot currently do, and to accomplish it, Apple is beckoning its army of third-party developers to fine-tune their apps to work with Apple Intelligence. Eventually, Siri may be able to trigger any action in an app that a user can take, part of the company’s long-term vision for Siri, Apple said in June.

“Siri will have the ability to take hundreds of new actions in and across apps,” said Apple’s Kelsey Peterson, director of machine learning, in the Apple Intelligence launch video.

Apple can easily make this happen for its own apps, but for Apple Intelligence to interact with the millions of non-Apple apps, it needs developers to embrace a new way of programming their apps. This means developers will need to write what could amount to hundreds of small snippets of additional code, called App Intents.

Apple has a strong history of getting its developers to support new platform initiatives, and it’s running a well-worn playbook to get them on board: personal attention from developer relations, a party-like atmosphere at the company’s annual developers conference and, most importantly, the promise of App Store promotion that can lead to millions of downloads for those who sign on.

If third-party developers jump on board and the Siri system works as advertised, it could represent one of Apple’s biggest and most durable advantages in the AI race.

“You should be able to string things together and kind of get that future we’ve all been envisioning where you can use Siri conversationally, to do a bunch of things at once,” said Jordan Morgan, an iOS developer who’s written a tutorial about App Intents.

Whether Apple is successful at cajoling its millions of developers is a critical question, and the stakes are high for the company. 

The company is relying on Apple Intelligence, which only works on last year’s iPhone 15 Pro or iPhone 16 models that came out this year, to spur a wave of upgrades and boost flat iPhone sales. If Apple’s improved Siri is poorly supported by developers or it fails to impress, it could cool iPhone sales, and customers could wind up choosing to use a rival’s voice assistant through an app instead of the built-in Siri.

Apple Intelligence photos

Apple Inc. 

What are App Intents?

An App Intent is a small unit of code that exposes one of an app’s actions to the operating system. Inside the Music app, for example, Apple has built about 10 intents, including actions like “Add to Playlist,” “Play Music” and “Select Music.” A single App Intent should define a single action, programmers say.

If you take a caffeine tracking app, for example, one intent would be the ability to show an overview of exactly how much caffeine the user has logged today, Morgan said.

When that App Intent is finished, Apple’s various “system experiences,” such as widgets, Live Activities, Control Center and Shortcuts, will be able to display a running tally of how much caffeine has been logged without the user ever opening the tracking app.
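Morgan’s caffeine example maps almost directly onto code. What follows is a minimal sketch only, not Apple’s or Morgan’s implementation: the CaffeineLog type and its stubbed value are hypothetical stand-ins for a real app’s data layer, while AppIntent, LocalizedStringResource and the result(dialog:) pattern come from Apple’s AppIntents framework.

    import AppIntents

    // Hypothetical data layer for the caffeine-tracking example;
    // a real app would read from its own persistence store.
    struct CaffeineLog {
        static let shared = CaffeineLog()
        func todaysTotalMilligrams() -> Int { 180 } // stubbed value
    }

    // One intent, one action: report today's logged caffeine.
    // Conforming to AppIntent is what exposes the action to Siri,
    // Shortcuts, widgets and system search.
    struct ShowCaffeineOverviewIntent: AppIntent {
        // The name the system displays for this action.
        static var title: LocalizedStringResource = "Show Caffeine Overview"

        func perform() async throws -> some IntentResult & ProvidesDialog {
            let total = CaffeineLog.shared.todaysTotalMilligrams()
            return .result(dialog: "You've logged \(total) mg of caffeine today.")
        }
    }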

System search is another big draw for some developers. App Intents will allow apps to surface specific emails or other more granular data inside Spotlight, Apple’s system search.

App Intents don’t take that long to write, developers say, often requiring only a few lines of code. 
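For an intent to be invocable by voice, an app can also register spoken phrases with the system through the framework’s AppShortcutsProvider. A short sketch, reusing the hypothetical intent above:

    import AppIntents

    // Registers a Siri/Shortcuts phrase for the hypothetical
    // ShowCaffeineOverviewIntent sketched earlier. The
    // \(.applicationName) token stands in for the app's real name.
    struct CaffeineAppShortcuts: AppShortcutsProvider {
        static var appShortcuts: [AppShortcut] {
            AppShortcut(
                intent: ShowCaffeineOverviewIntent(),
                phrases: ["Show my caffeine total in \(.applicationName)"],
                shortTitle: "Caffeine Overview",
                systemImageName: "cup.and.saucer"
            )
        }
    }

Once registered this way, the phrase becomes available to Siri as soon as the app is installed, with no setup required from the user.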

In previous years, Apple recommended that developers adopt App Intents for their most important features, said Michael Tigas, the developer of Focused Work, a productivity app.

“Now, if there’s a way to adjust your app to perform any general action then you should create an App Intent for it,” Tigas said.

Fortunately for developers, they still have time to write all the code necessary for App Intents. While Apple Intelligence is starting to roll out later this month, the biggest improvements to Siri aren’t scheduled to be released until next year.

Apple has to incentivize developers

Apple’s new Siri system will better understand questions even if a user makes a speaking error, a direct result of Apple’s work with language models, relatives of the large language models that power systems like OpenAI’s ChatGPT.

That means that Siri will be much more flexible in understanding the hundreds of different ways a user could phrase, for example, “apply a photo filter to an image I took yesterday.”

Apple has to train and test its model to understand the range of the most likely commands and questions for any given category of apps.

A downside to Apple’s approach is that only a few categories of apps will be supported by the new Siri at first, starting with photo and email apps. Eventually, Siri will support apps that focus on books, journaling, whiteboards, managing files, word processing, browsers, cameras and photos, the company said.

Developers are already imagining how users might interact with their apps by voice.

A representative for Superhuman, a premium email app, told CNBC that it plans to use Apple’s AI system to enable questions about the contents of emails, such as “Hey Siri, when does my flight depart?” or “Hey Siri, when am I meeting with James to review his proposal?”

There’s a downside to Apple’s plan in the eyes of some developers, who worry that users will spend less time inside their apps or will confuse Apple Intelligence with the AI features the developers have built themselves.

“If this story were only about App Intents, developers would worry that their products might be reduced to the role of the plumbing that powers Siri, and leave them unclear on how to build sustainable businesses around it,” Igor Zhadanov, CEO of Readdle, which makes email app Spark, wrote in an email.

Another drawback is that Apple Intelligence features will only be available on the latest iPhones, a small subset of the total iPhone user base. That limited market of iPhone users may discourage developers from investing time and effort into supporting the technology in the near term.

“Apple are limiting these kinds of Apple Intelligence features to the new 2024 iPhones and the expensive models from last year, so you won’t be able to build something for the masses anyway,” Tigas said.

WATCH: Apple Intelligence will mark a ‘renaissance of growth’ for the company, says Wedbush’s Dan Ives


Cybersecurity firm Netskope files to go public on the Nasdaq

Sanjay Beri, chief executive officer and founder of Netskope Inc., listens during a Bloomberg West television interview in San Francisco, California.

David Paul Morris | Bloomberg | Getty Images

Cloud security platform Netskope will go public on the Nasdaq under the ticker symbol “NTSK,” the company said in an initial public offering filing Friday.

The Santa Clara, California-based company said annual recurring revenue grew 33% to $707 million, while revenue jumped 31% to about $328 million in the first half of the year.

But Netskope isn’t profitable yet. The company recorded a $170 million net loss during the first half of the year. That narrowed from a $207 million loss a year ago.

Netskope joins an increasing number of technology companies adding momentum to the surge in IPO activity after high inflation and interest rates effectively killed the market.

So far this year, design software firm Figma more than tripled in its New York Stock Exchange debut, while crypto firm Circle soared 168% in its first trading day. CoreWeave has also popped since its IPO, while trading app eToro surged 29% in its May debut.


Netskope’s offering also coincides with a busy period for cybersecurity deals.

The year’s two biggest technology deals include Alphabet’s $32 billion acquisition of Wiz and Palo Alto Networks’ ambitious plan to buy Israeli identity security company CyberArk for $25 billion.

Founded in 2012, Netskope made a name for itself in its early years in the cloud access security broker space. The company lists Palo Alto Networks, Cisco, Zscaler, Broadcom and Fortinet as its major competitors.

Netskope’s biggest backers include Accel, Lightspeed Ventures and Iconiq, which recently benefited from Figma’s stellar debut.

Morgan Stanley and JPMorgan are leading the offering. Netskope listed 13 other Wall Street banks as underwriters.

Meta set to unveil first consumer-ready smart glasses with a display, wristband next month

Meta CEO Mark Zuckerberg makes a keynote speech at the Meta Connect annual event at the company’s headquarters in Menlo Park, Calif., on Sept. 25, 2024.

Manuel Orbegozo | Reuters

Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display, CNBC has learned.

That’s one of the two new devices Meta is planning to unveil at the event, according to people familiar with the matter. The company will also launch its first wristband that will allow users to control the glasses with hand gestures, the people said.

Connect is a two-day conference for developers focused on virtual reality, AR and the metaverse. It was originally called Oculus Connect and obtained its current moniker after Facebook changed its parent company name to Meta in 2021.

The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device, said the people, who asked not to be named because the details are confidential.

The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica, the people said. CNBC reported in October that Meta was working with Luxottica on consumer glasses with a display.

Meta declined to comment. Luxottica, which is based in France and Italy, didn’t respond to a request for comment.

Meta began selling smart glasses with Luxottica in 2021 when the two companies released the first-generation Ray-Ban Stories, which allowed users to take photos or videos using simple voice commands. The partnership has since expanded, and last year included the addition of advanced AI features that made the second generation of the product an unexpected hit with early adopters. 

Luxottica owns a number of glasses brands, including Ray-Ban, and licenses many others like Prada. It’s unclear what brand Luxottica will use for the glasses with AR, but a Meta job listing posted this week said the company is looking for a technical program manager for its “Wearables organization,” which “is responsible for the Ray-Ban AR glasses and other wearable hardware.”

In June, CNBC reported that Meta and Luxottica plan to release Prada-branded smart glasses. Prada glasses are known for having thick frames and arms, which could make them a suitable option for the Hypernova device, one of the people said. 


Last year, Meta CEO Mark Zuckerberg used Connect to showcase the company’s experimental Orion AR glasses.

Orion features AR capabilities in both lenses, capable of blending 3D digital visuals into the physical world, but the device served only as a prototype to show the public what could be possible with AR glasses. Still, Orion built some positive momentum for Meta, which since late 2020 has endured nearly $70 billion in losses from its Reality Labs unit, the division in charge of building hardware devices.

With Hypernova, Meta will finally be offering glasses with a display to consumers, but the company is setting low expectations for sales, some of the sources said. That’s because the device requires more components than its voice-only predecessors, and will be slightly heavier and thicker, the people said.

Meta and Ray-Ban have sold 2 million pairs of their second-generation glasses since 2023, Luxottica CEO Francesco Milleri said in February. In July, Luxottica said that revenue from sales of the smart glasses had more than tripled year over year.

As part of an extension agreement between Meta and Luxottica announced in September, Meta obtained a stake of about 3% in the glasses company, according to Bloomberg. Meta also gets exclusive rights to Luxottica’s brands for its smart glasses technology for a number of years, a person familiar with the matter told CNBC in June.

Although Hypernova will feature a display, those visual features are expected to be limited, people familiar with the matter said. They said the color display will offer about a 20-degree field of view, meaning it will appear in a small window in a fixed position, and will be used primarily to relay simple bits of information, such as incoming text messages.

Andrew Bosworth, Meta’s technology chief, said earlier this month that there are advantages to having just one display rather than two, including a lower price.

“Monocular displays have a lot going for them,” Bosworth said in an Instagram video. “They’re affordable, they’re lighter, and you don’t have disparity correction, so they’re structurally quite a bit easier.”

‘Interact with an AI assistant’

Other details of Meta’s forthcoming glasses were disclosed in a July letter from U.S. Customs and Border Protection to a lawyer representing Meta. While the letter redacted the name of the company and the product, a person with knowledge of the matter confirmed that it was in reference to Meta’s Hypernova glasses.

“This model will enable the user to take and share photos and videos, make phone calls and video calls, send and receive messages, listen to audio playback and interact with an AI assistant in different forms and methods, including voice, display, and manual interactions,” according to the letter, dated July 23.

The letter from CBP was part of routine communication between companies and the U.S. government when determining the country of origin for a consumer product. It refers to the product as “New Smart Glasses,” and says the device will feature “a lens display function that allows the user to interface with visual content arising from the Smart Features, and components providing image data retrieval, processing, and rendering capabilities.”

CBP didn’t provide a comment for this story.

The Hypernova glasses will also come paired with a wristband that will use technology built by Meta’s CTRL Labs, said people familiar with the matter. CTRL Labs, which Meta acquired in 2019, specializes in neural technology that could allow users to control computing devices using hand and arm gestures.

The wristband is expected to be a key input component for the company’s future release of full AR glasses, so getting data now with Hypernova could improve future versions of the wristband, the people said. Instead of using camera sensors to track body movements, as with Apple’s Vision Pro headset, Meta’s wristband uses so-called sEMG (surface electromyography) sensor technology, which reads and interprets the electrical signals generated by hand movements.

One of the challenges Meta has faced with the wristband involves how people choose to wear it, a person familiar with the product’s development said. If the device is too loose, it won’t be able to read the user’s electrical signals as intended, which could impact its performance, the person said. Also, the wristband has run into issues in testing related to which arm it’s worn on, how it works on men versus women and how it functions on people who wear long sleeves.

The CTRL Labs team published a paper in Nature in July about its wristband, and Meta wrote about it in a blog post. In the paper, the Meta team detailed its use of machine learning technology to make the wristband work with as many people as possible. The additional data collected by the upcoming device should improve those capabilities for future Meta smart glasses.

“We successfully prototyped an sEMG wristband with Orion, our first pair of true augmented reality (AR) glasses, but that was just the beginning,” Meta wrote in the post. “Our teams have developed advanced machine learning models that are able to transform neural signals controlling muscles at the wrist into commands that drive people’s interactions with the glasses, eliminating the need for traditional—and more cumbersome—forms of input.”

Bloomberg reported the wristband component in January.

Meta has recently started reaching out to developers to begin testing both Hypernova and the accompanying wristband, people familiar with the matter said. The company wants to court third-party developers, particularly those who specialize in generative AI, to build experimental apps that Meta can showcase to drum up excitement for the smart glasses, the people said.

In addition to Hypernova and the wristband, Meta will also announce a third generation of its voice-only smart glasses with Luxottica at Connect, one person said.

That device was also referenced by CBP in its July letter, referring to it as “The Next Generation Smart Glasses.” The glasses will include “components that provide capacitive touch functionality, allowing users to interact with the Smart Glasses through touch gestures,” the letter said.

WATCH: Elon Musk asked Meta CEO Mark Zuckerberg to join xAI bid to buy OpenAI

Google shares rise on report of Apple using Gemini for Siri

Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California on May 20, 2025.

Camille Cohen | Afp | Getty Images

Alphabet shares rose on a Friday report that Apple is in early discussions to use Google’s Gemini AI models for an updated version of the iPhone-maker’s Siri assistant.

The company’s shares rose more than 3% on the Bloomberg report, which said Apple recently approached Google about the potential for the search giant to build a custom AI model to power a new Siri that could launch next year. Google’s flagship Gemini models have consistently sat atop key artificial intelligence benchmarks, while Apple has struggled to define its own AI strategy.

The reported talks come as Google faces potential risk to its lucrative search deals with Apple. This month, a U.S. judge is expected to rule on penalties for Google’s alleged search monopoly, a case in which the Department of Justice has recommended eliminating exclusionary agreements with third parties. For Google, that refers to its default search position on Apple’s iPhones and Samsung devices, deals that cost the company billions of dollars a year in payouts.

The Android maker has said its Gemini models will become the default assistant on Android phones. Google this year has shown Gemini performing tasks that go beyond Siri’s capabilities, such as summarizing videos.

Craig Federighi, who oversees Apple’s operating systems, said at last year’s developer conference that the iPhone maker would like to add other AI models for specific purposes into its Apple Intelligence framework. Federighi specifically mentioned Google, whose Gemini can now hold conversations with users and handle input from photos, videos, voice or text. Apple is also exploring partnerships with Anthropic and OpenAI as it tries to revamp its AI roadmap, according to a June Bloomberg report.

Documents revealed during Google’s remedy trial showed executives from Apple were involved in the negotiations over using Google’s Gemini for a potential search option.

Google declined to comment. 

WATCH: Apple explores using Google Gemini AI to power revamped Siri, reports say

