Apple’s new Vision Pro virtual reality headset is displayed during Apple’s Worldwide Developers Conference (WWDC) at the Apple Park campus in Cupertino, California, on June 5, 2023.

Josh Edelson | Afp | Getty Images

For years, Apple avoided using the acronym AI when talking about its products. Not anymore.

The boom in generative artificial intelligence, spawned in late 2022 by OpenAI, has been the biggest story in the tech industry of late, lifting chipmaker Nvidia to a $3 trillion market cap and causing a major shifting of priorities at Microsoft, Google and Amazon, which are all racing to add the technology into their core services.

Investors and customers now want to see what the iPhone maker has in store.

New AI features are coming at Apple’s Worldwide Developers Conference (WWDC), which takes place on Monday at Apple’s campus in Cupertino, California. Apple CEO Tim Cook has teased “big plans,” a change of approach for a company that doesn’t like to talk about products before they’re released.

WWDC isn’t typically a major investor attraction. On the first day, the company announces annual updates to its iOS, iPadOS, WatchOS and MacOS software in what’s usually a two-hour prerecorded keynote emceed by Cook. This year, the presentation will be screened at Apple’s headquarters. App developers then get a week of parties and virtual workshops where they learn about the new Apple software.

Apple fans get a preview of the software coming to iPhones. Developers can get to work updating their apps. New hardware products, if they appear at all, are not the showcase.

But this year, everyone will be listening for the most hyped acronym in tech.

With more than 1 billion iPhones in use, Wall Street wants to hear what AI features are going to make the iPhone more competitive against Android rivals and how the company can justify its investment in developing its own chips.

Investors have rewarded companies that show a clear AI strategy and vision. Nvidia, the primary maker of AI processors, has seen its stock price triple in the past year. Microsoft, which is aggressively incorporating OpenAI into its products, is up 28% over the past year. Apple is only up 9% over that same period, and has seen the other two companies surpass it in market cap.

“This is the most important event for Cook and Cupertino in over a decade,” Dan Ives, an analyst at Wedbush, told CNBC. “The AI strategy is the missing piece in the growth puzzle for Apple and this event needs to be a showstopper and not a shrug-the-shoulders event.”

Taking the stage will be executives including software chief Craig Federighi, who will likely address real-world uses of Apple’s AI, whether it should run locally or in massive cloud clusters, and what should be built into the operating system versus distributed in an app.

Privacy is also a key issue, and attendees will likely want to know how Apple can deploy the data-hungry technology without compromising user privacy, a centerpiece of the company’s marketing for over half a decade.

“At WWDC, we expect Apple to unveil its long-term vision around its implementation of generative AI throughout its diverse ecosystem of personal devices,” wrote Gil Luria, an analyst at D.A. Davidson, in a note this week. “We believe that the impact of generative AI to Apple’s business is one of the most profound in all of technology, and unlike much of the innovation in AI that’s impacting the developer or enterprise, Apple has a clear opportunity to reach billions of consumer devices with generative AI functionality.”

Upgrading Siri

Last month, OpenAI revealed a voice mode for ChatGPT powered by its new GPT-4o model.

In a short demo, OpenAI researchers held an iPhone and spoke directly to the bot inside the ChatGPT app, which was able to do impressions, speak fluidly and even sing. The conversation was snappy, the bot gave advice and the voice sounded like a human. Further demos at the live event showed the bot singing, teaching trigonometry, translating and telling jokes.

Apple users and pundits immediately understood that OpenAI had demoed a preview of what Apple’s Siri could be in the future. Apple’s voice assistant debuted in 2011 and has since gained a reputation for not being useful. It’s rigid, able to answer only a small set of well-defined queries, partly because it’s built on older machine learning techniques.

Apple could team up with OpenAI to upgrade Siri next week. It’s been discussing licensing chatbot technology from other companies, too, including Google and Cohere, according to a report from The New York Times.

Apple declined to comment on an OpenAI partnership.

One possibility is that Apple’s new Siri won’t compete directly with fully featured chatbots, but will improve its current features and hand off queries that only a chatbot can answer to a partner. That’s close to how Apple’s Spotlight search and Siri work now: Apple’s system tries to answer the question, but if it can’t, it turns to Google. That arrangement is part of a deal worth $18 billion per year to Apple.
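The hybrid approach described above can be sketched in a few lines. This is a purely illustrative model of the routing idea, not Apple's actual implementation; every name here is hypothetical.

```python
# Hypothetical sketch of a hybrid assistant: handle well-defined intents
# on-device, and hand anything open-ended off to a partner chatbot.
# All function and intent names are illustrative, not Apple's API.

LOCAL_INTENTS = {"set_timer", "play_music", "send_message", "weather"}

def classify_intent(query: str) -> str:
    # Stand-in for the assistant's real intent classifier.
    if "timer" in query.lower():
        return "set_timer"
    return "open_ended"

def answer_locally(intent: str, query: str) -> str:
    # A narrow, well-defined query is resolved on the device itself.
    return f"Handled on-device: {intent}"

def ask_partner_chatbot(query: str) -> str:
    # In practice this would be a network call to a licensed chatbot.
    return f"Partner chatbot answer for: {query!r}"

def handle_query(query: str) -> str:
    intent = classify_intent(query)
    if intent in LOCAL_INTENTS:
        return answer_locally(intent, query)
    return ask_partner_chatbot(query)
```

The design mirrors the Spotlight-to-Google fallback the article describes: the local system gets first refusal, and only unrecognized queries leave the device.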

Apple might also shy away from a full-throated embrace of an OpenAI partnership or chatbot. One reason is that a malfunctioning chatbot could generate embarrassing headlines, and could undermine the company’s emphasis on user privacy and personal control of user data.

“Data security will be a key advantage for the company and we expect them to spend time talking about their privacy efforts during the WWDC as well,” Citi analyst Atif Malik said in a recent note.

OpenAI’s technology is based on web scraping, and ChatGPT user interactions are used to improve the model itself, a technique that could violate some of Apple’s privacy principles.

Large language models like OpenAI’s still have problems with inaccuracies or “hallucinations,” like when Google’s search AI said last month that President Barack Obama was the first Muslim president. OpenAI CEO Sam Altman recently found himself in the middle of a thorny societal debate about deepfakes and deception when he denied accusations from actress Scarlett Johansson that OpenAI’s voice mode had ripped off her voice. It’s the kind of conflict that Apple executives prefer to avoid.

Efficient vs. large

Apple senior vice president of software engineering Craig Federighi speaks before the start of the Apple Worldwide Developers Conference at its headquarters on June 5, 2023 in Cupertino, California. Apple CEO Tim Cook kicked off the annual WWDC23 developer conference.

Justin Sullivan | Getty Images News | Getty Images

Outside of Apple, AI has become reliant on big server farms using powerful Nvidia processors paired with terabytes of memory to crunch numbers.

Apple, by contrast, wants its AI features to run on iPhones, iPads and Macs, which operate on battery power. Cook has highlighted Apple’s own chips as superior for running AI models.

“We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple Silicon with our industry-leading neural engines, and our unwavering focus on privacy,” Cook told investors in May on an earnings call.

Samik Chatterjee, an analyst at JPMorgan, wrote in a note this month that, “We expect Apple’s presentation at WWDC keynote to be focused on the features and the on-device capabilities as well as the GenAI models being run on-device to enable those features.”

In April, Apple published research about AI models it calls “efficient language models” that would be able to run on a phone. Microsoft is also publishing research on the same concept. One of Apple’s “OpenELM” models has 1.1 billion parameters, or weights, far smaller than OpenAI’s 2020 GPT-3 model, which has 175 billion, and smaller even than the 70 billion parameters in one version of Meta’s Llama, one of the most widely used language models.

In the paper, Apple’s researchers benchmarked the model on a MacBook Pro laptop running Apple’s M2 Max chip, showing that these efficient models don’t necessarily need to connect to the cloud. That can improve response speed, and provide a layer of privacy, because sensitive questions could be answered on the device itself, rather than being sent back to Apple servers.
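A back-of-envelope calculation shows why those parameter counts matter for on-device use. Assuming 16-bit (2-byte) weights, which is a common baseline, the raw memory footprints work out as follows; real deployments often quantize to 8 or 4 bits to shrink them further.

```python
# Rough memory footprint of a language model from its parameter count,
# assuming 2 bytes per weight (16-bit). A sketch for scale, not a spec.

def model_size_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

openelm = model_size_gb(1.1)    # ~2.2 GB: plausible on a phone or laptop
llama70 = model_size_gb(70)     # ~140 GB: needs server-class hardware
gpt3 = model_size_gb(175)       # ~350 GB: data-center territory
```

The gap between roughly 2 GB and hundreds of gigabytes is the practical line between a model that fits alongside apps on a battery-powered device and one that must live in the cloud.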

Some of the features built into Apple’s software could include providing users a summary of their missed text messages, image generation for new emojis, code completing in the company’s development software Xcode, or drafting email responses, according to Bloomberg.

Apple could also decide to load up its M2 Ultra chips in its data centers to process AI queries that need more horsepower, Bloomberg reported.

Green bubbles and Vision Pro

A customer uses Apple’s Vision Pro headset at the Apple Fifth Avenue store in Manhattan in New York City, U.S., February 2, 2024. 

Brendan McDermid | Reuters

WWDC won’t strictly be about AI.

The company has more than 2.2 billion devices in use, and customers want improved software and new apps.

One potential upgrade could be Apple’s adoption of RCS, an improvement on the older text messaging standard known as SMS. Apple’s Messages app routes texts between iPhones through its own iMessage system, which displays conversations as blue bubbles. When an iPhone texts an Android phone, the bubble is green, and many features, such as typing indicators, aren’t available.

Google led development of RCS, adding encryption and other features to text messaging. Late last year Apple confirmed that it would add support for RCS alongside iMessage. The debut of iOS 18 would be the logical time to show its work.

The conference will also be the first anniversary of Apple’s reveal of the Vision Pro, its virtual and augmented reality headset, which was released in the U.S. in February. Apple could announce its expansion to more countries, including China and the U.K.

Apple said in its WWDC announcement that the Vision Pro would be in the spotlight. Vision Pro is currently on the first version of its operating system, and core features, such as its Persona videoconferencing simulation, are still in beta.

For users with a Vision Pro, Apple will offer some of its virtual sessions at the event in a 3D environment.


OpenAI wins $200 million U.S. defense contract

OpenAI CEO Sam Altman speaks during the Snowflake Summit in San Francisco on June 2, 2025.

Justin Sullivan | Getty Images News | Getty Images

OpenAI has been awarded a $200 million contract to provide the U.S. Defense Department with artificial intelligence tools.

The department announced the one-year contract on Monday, months after OpenAI said it would collaborate with defense technology startup Anduril to deploy advanced AI systems for “national security missions.”

“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said. It’s the first contract with OpenAI listed on the Department of Defense’s website.

Anduril received a $100 million defense contract in December. Weeks earlier, OpenAI rival Anthropic said it would work with Palantir and Amazon to supply its AI models to U.S. defense and intelligence agencies.

Sam Altman, OpenAI’s co-founder and CEO, said in a discussion with OpenAI board member and former National Security Agency leader Paul Nakasone at a Vanderbilt University event in April that “we have to and are proud to and really want to engage in national security areas.”

OpenAI did not immediately respond to a request for comment.

The Defense Department specified that the contract is with OpenAI Public Sector LLC, and that the work will mostly occur in the National Capital Region, which encompasses Washington, D.C., and several nearby counties in Maryland and Virginia.

Meanwhile, OpenAI is working to build additional computing power in the U.S. In January, Altman appeared alongside President Donald Trump at the White House to announce the $500 billion Stargate project to build AI infrastructure in the U.S.

The new contract will represent a small portion of revenue at OpenAI, which is generating over $10 billion in annualized sales. In March, the company announced a $40 billion financing round at a $300 billion valuation.

In April, Microsoft, which supplies cloud infrastructure to OpenAI, said the U.S. Defense Information Systems Agency has authorized the use of the Azure OpenAI service with secret classified information. 


Amazon Kuiper second satellite launch postponed by ULA due to rocket booster issue

A United Launch Alliance Atlas V rocket is shown on its launch pad carrying Amazon’s Project Kuiper internet network satellites as the vehicle is prepared for launch at the Cape Canaveral Space Force Station in Cape Canaveral, Florida, U.S., April 28, 2025.

Steve Nesius | Reuters

United Launch Alliance on Monday was forced to delay the second flight carrying a batch of Amazon‘s Project Kuiper internet satellites because of a problem with the rocket booster.

With roughly 30 minutes left in the countdown, ULA announced it was scrubbing the launch due to an issue with “an elevated purge temperature” within its Atlas V rocket’s booster engine. The company said it will provide a new launch date at a later point.

“Possible issue with a GN2 purge line that cannot be resolved inside the count,” ULA CEO Tory Bruno said in a post on Bluesky. “We will need to stand down for today. We’ll sort it and be back.”

The launch from Florida’s Space Coast had been set for last Friday, but was rescheduled to Monday at 1:25 p.m. ET due to inclement weather.

Amazon in April successfully sent up 27 Kuiper internet satellites into low Earth orbit, a region of space that’s within 1,200 miles of the Earth’s surface. The second voyage will send “another 27 satellites into orbit, bringing our total constellation size to 54 satellites,” Amazon said in a blog post.

Kuiper is the latest entrant in the burgeoning satellite internet industry, which aims to beam high-speed internet to the ground from orbit. The industry is currently dominated by Elon Musk’s SpaceX, which operates Starlink. Other competitors include SoftBank-backed OneWeb and Viasat.

Amazon is targeting a constellation of more than 3,000 satellites. The company has to meet a Federal Communications Commission deadline to launch half of its total constellation, or 1,618 satellites, by July 2026.


Google issues apology, incident report for hourslong cloud outage

Thomas Kurian, CEO of Google Cloud, speaks at a cloud computing conference held by the company in 2019.

Michael Short | Bloomberg | Getty Images

Google apologized for a major outage that the company said was caused by multiple layers of flawed recent updates.

The company released an incident report late on Friday that explained hours of downtime on Thursday. More than 70 Google cloud services stopped working properly across the globe, knocking down or disrupting dozens of third-party services, including Cloudflare, OpenAI and Shopify. Gmail, Google Calendar, Google Drive, Google Meet and other first-party products also malfunctioned.

“We deeply apologize for the impact this outage has had,” Google wrote in the incident report. “Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust of our systems. We are committed to making improvements to help avoid outages like this moving forward.”

Thomas Kurian, CEO of Google’s cloud unit, also posted about the outage in an X post on Thursday, saying “we regret the disruption this caused our customers.”

Google in May added a new feature to its “quota policy checks” for evaluating automated incoming requests, but the new feature wasn’t immediately tested in real-world situations, the company wrote in the incident report. As a result, the company’s systems didn’t know how to properly handle data from the new feature, which included blank entries. Those blank entries were then sent out to all Google Cloud data center regions, which prompted the crashes, the company wrote.

Engineers figured out the issue in 10 minutes, according to the company. However, the entire incident went on for seven hours after that, with the crash leading to an overload in some larger regions.

As it released the feature, Google did not use feature flags, an increasingly common industry practice that allows for slow implementation to minimize impact if problems occur. Feature flags would have caught the issue before the feature became widely available, Google said.
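The two safeguards the report points to, input validation and flag-gated rollout, can be sketched together. This is an illustrative reconstruction of the pattern, not Google's code; all names and the rollout percentage are hypothetical.

```python
# Sketch of the safeguards described in the incident report: gate a new
# check behind a feature flag rolled out region by region, and validate
# incoming data so blank entries fail open instead of crashing.
# All names and numbers here are hypothetical.

FLAG_ROLLOUT_PCT = {"new_quota_check": 5}  # start with 5% of regions

def flag_enabled(flag: str, region_id: int) -> bool:
    # Deterministic bucketing: only regions in the rollout slice see the
    # new code path, limiting the blast radius of any defect.
    pct = FLAG_ROLLOUT_PCT.get(flag, 0)
    return region_id % 100 < pct

def check_quota(policy: dict, region_id: int) -> bool:
    if not flag_enabled("new_quota_check", region_id):
        return True  # old path: feature is off in this region
    # Validate before use: a blank/missing field must not crash the server.
    limit = policy.get("limit")
    if limit is None:
        return True  # fail open and log, rather than crash globally
    return policy.get("usage", 0) <= limit
```

Had the new quota check been bucketed this way, the malformed blank entries would have affected only the small rollout slice, and the `None` check would have kept even those regions up.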

Going forward, Google will change its architecture so if one system fails, it can still operate without crashing, the company said. Google said it will also audit all systems and improve its communications “both automated and human, so our customers get the information they need asap to react to issues.” 

— CNBC’s Jordan Novet contributed to this report.
