Matt Garman, CEO of Amazon Web Services, speaks during The Wall Street Journal’s Tech Live conference in Laguna Beach, California, on Oct. 21, 2024.
Frederic J. Brown | AFP | Getty Images
Amazon said revenue in its cloud unit increased 19% in the third quarter, just missing analyst estimates.
Revenue at Amazon Web Services totaled $27.45 billion, according to a statement Thursday, while Wall Street was expecting $27.52 billion, based on StreetAccount estimates. Year-over-year growth has accelerated for five consecutive quarters.
AWS' artificial intelligence business is generating billions of dollars in annualized revenue and more than doubling year over year, Amazon CEO Andy Jassy, who previously led AWS, said on a call with analysts.
“I believe we have more demand than we could fulfill if we had even more capacity today,” Jassy said. “I think pretty much everyone today has less capacity than they have demand for, and it’s really primarily chips that are the area where companies could use more supply.”
AWS leads the cloud infrastructure market over Google and Microsoft and is an important source of profit for Amazon.
On Tuesday, Google parent Alphabet said revenue from Google Cloud, which includes cloud applications as well as infrastructure, totaled $11.35 billion, up 35%. Microsoft said Wednesday that revenue from Azure and other cloud services grew 33%.
AWS recorded $10.45 billion in operating income, representing 60% of its parent’s profit. Analysts expected $9.15 billion.
The unit’s operating margin came in at 38%, the widest for AWS since at least 2014. Google Cloud reported an operating margin of 17%.
“We’re being very measured in our hiring,” Brian Olsavsky, Amazon’s finance chief, said on the call.
“If this is successful, we would love to find more pieces of their application stack that could run well in AWS and help customers do that,” AWS CEO Matt Garman told CNBC in a September interview.
Also in the quarter, AWS announced plans to discontinue some services, including code-repository tool CodeCommit. Garman told TechCrunch that AWS “can’t invest in everything.”
A group of prominent figures, including artificial intelligence and technology experts, has called for an end to efforts to create ‘superintelligence’ — a form of AI that would surpass human intellect.
More than 800 people, including Apple cofounder Steve Wozniak and former U.S. National Security Advisor Susan Rice, signed a statement published Wednesday calling for a pause on the development of superintelligence.
The list of signatories notably features AI pioneers such as Yoshua Bengio and Geoffrey Hinton, who are widely considered "godfathers" of modern AI. Leading AI safety researchers like UC Berkeley's Stuart Russell also signed on.
Superintelligence has become a buzzword in the AI world, as companies from xAI to OpenAI compete to release more advanced large language models. Meta notably has gone so far as to name its LLM division the ‘Meta Superintelligence Labs.’
But signatories of the recent statement warn that the prospect of superintelligence has “raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
The statement calls for a prohibition on superintelligence development until strong public buy-in and a broad scientific consensus that it can be done safely and controllably are reached.
In addition to the AI figures, the names behind the statement come from a broad coalition of academics, media personalities, religious leaders and ex-politicians.
Other prominent names include Virgin's Richard Branson, former chairman of the Joint Chiefs of Staff Mike Mullen, and British royal family member Meghan Markle. Prominent media allies of U.S. President Donald Trump, including Steve Bannon and Glenn Beck, also signed on.
As of Wednesday, the list of signatories was still growing.
Netflix is “all in” on leveraging generative artificial intelligence on its streaming platform, according to the company, as AI continues to make its way into mainstream entertainment.
The comments came from Netflix’s earnings report Tuesday, which highlighted AI as a major focus for the world’s largest streaming service by subscriber count.
“For many years now, [machine learning] and AI have been powering our title recommendations as well as production and promotion technology,” Netflix said in a letter to shareholders.
It added that generative AI presents a “significant opportunity” across its streaming platform, including improving its recommendations, ads business, and movies and TV content.
“We’re empowering creators with a broad set of GenAI tools to help them achieve their visions and deliver even more impactful titles for members,” the company said.
Netflix provided recent examples of this, noting its recently distributed film Happy Gilmore 2 used generative AI tools to help de-age characters. Meanwhile, producers for the Netflix series Billionaires’ Bunker have used various generative AI tools during pre-production to explore wardrobe and set designs.
Concerns of AI replacement
Netflix’s comments come amid broader concerns in the entertainment and art world regarding the potential for AI to replace human workers and the technology’s use of human-made content.
Speaking during an earnings call, Netflix CEO Ted Sarandos seemingly addressed those issues, noting that AI can enhance the overall TV and movie experience, but “can’t automatically make you a great storyteller if you’re not.”
“We’re confident that AI is going to help us and help our creative partners tell stories better, faster and in new ways — we’re all in on that,” Sarandos said. He added: “We’re not worried about AI replacing creativity.”
However, many in the entertainment industry remain skeptical of AI and its growing presence in media.
An upstart production studio called Particle6 recently faced massive backlash, including from the media union SAG-AFTRA, for its plan to create, design, manage and monetize AI-generated actors and talent.
SAG-AFTRA previously led a significant actors’ strike in July 2023, amid a broader series of Hollywood labor disputes that saw concerns about the use of artificial intelligence brought to the forefront.
The strike lasted over 100 days before a tentative agreement was reached between SAG-AFTRA and the Alliance of Motion Picture and Television Producers, which included the establishment of contractual AI protections for film and TV performers for the first time.
To further encourage the responsible use of such AI tools, Netflix recently released new AI-focused production guidance aimed at its creators.
It’s been more than 17 years since the modern smartphone era began with the launch of the iPhone, and tech companies have been obsessed with trying to disrupt it ever since.
The most common approach is mixed-reality (XR) headsets: computerized goggles that put all of your apps and other digital content right in front of your face.
Samsung is the latest to take on the category with the Galaxy XR, which goes on sale Tuesday night for $1,800, about half the price of Apple's Vision Pro.
Early adopters will also get a suite of digital freebies, like free access to the paid version of Google‘s Gemini AI assistant and YouTube Premium for a year.
The headset was made in partnership with Google for the software and Qualcomm, which makes the chip powering the Galaxy XR.
Samsung Galaxy XR Headset
Courtesy: Samsung
Samsung’s Galaxy XR lets you enter an immersive, virtual computing experience where your apps and other content appear to float in your field of view. External cameras project the real world onto the tiny 4K displays in the headset, meaning you can walk around a room while wearing the Galaxy XR without bumping into anything.
You control everything with hand gestures, your voice or a mix of both.
As for the headset itself, you’d be forgiven for thinking you were looking at an Apple Vision Pro.
From the curved glass on the front of the Galaxy XR, to the metal trim and the external battery pack that dangles from the headset by a cable, it’s almost as if Samsung and Google spent the last two years reverse-engineering the Vision Pro.
And in those two years, we’ve learned a lot about these computers for your face.
They’re niche, expensive products that most people don’t want to use, and there’s still no killer app or enough immersive content to keep you consistently entertained and justify the $2,000 or more you’re spending.
The promise of the metaverse evaporated as soon as ChatGPT came on the scene in late 2022 and the tech industry shifted its focus to artificial intelligence. Even Mark Zuckerberg, who changed his company's name to "Meta" in 2021, barely talks about the metaverse anymore.
But Samsung has a different pitch for the Galaxy XR.
It may come with all the drawbacks of Apple's or Meta's headsets, but Samsung and Google say the Galaxy XR is really a stepping stone to AI glasses currently in development with eyewear brands Warby Parker and Gentle Monster.
Those devices will rely on Google’s AI assistant Gemini, which is also central to the experience on the Galaxy XR.
Google showed an early demo of those glasses at its annual I/O event in May, but there are no details on when such a device will launch. Google also has a long track record of announcing products at I/O that never actually go on sale to the public.
Remember Google Glass? What about the Nexus Q?
But Google and Samsung are acting like things are different this time, and that’s why Gemini is such a big part of the Galaxy XR.
You can control everything in the headset using hand gestures, and Samsung even mimicked the same gestures Apple came up with for the Vision Pro.
The Gemini controls were, however, the most impressive portion of the Galaxy XR demo Samsung had in New York last week.
I could use Gemini to organize floating windows of apps in my virtual workspace, ask it questions about landmarks I was looking at in Google Maps, or prompt it to generate a goofy video using Veo, Google’s AI video generator that’s like OpenAI’s Sora.
Overall, the Gemini demo was flawless. It understood everything I said, even in a noisy conference room, and executed my commands quickly.
It wasn’t exactly revolutionary, but it was a step beyond the capabilities of the Vision Pro, which doesn’t have generative AI features at all.
I could see how Gemini will evolve to fit into a more comfortable and stylish form factor, like Meta has with its Ray-Ban AI glasses. And I can now understand why Apple has reportedly changed its plans from developing a new version of the Vision Pro in favor of AI glasses that are expected to launch in 2026.
Now for the major downside.
Gemini runs in the cloud, meaning you must give it permission to “see” everything you do on your headset by transmitting it over the internet to Google’s servers. Google doesn’t have the same private cloud technology Apple has for its AI systems, so you risk sharing a lot of personal information about what you do on your device with the company. That’s going to be a nonstarter for many people.
Even though you can see the promise of AI-powered glasses, they're even more of a niche product than immersive headsets, with a market far smaller than smartphones, laptops or tablets.
Meta, the market leader for the category, only sold 2 million pairs of its Ray-Ban glasses in the first two years. By comparison, Apple sells well over 200 million iPhones a year. We’re a long way off from glasses becoming a must-have accessory to your phone like wireless earbuds or a smartwatch.
And as impressive as Gemini is so far, a future where the smartphone is replaced by an AI device like glasses has never felt further away.