Google CEO Sundar Pichai speaks at the Google I/O developer conference.
Andrej Sokolow | Picture Alliance | Getty Images
Google on Tuesday hosted its annual I/O developer conference, and rolled out a range of artificial intelligence products, from new search and chat features to AI hardware for cloud customers. The announcements underscore the company’s focus on AI as it fends off competitors, such as OpenAI.
Many of the features or tools Google unveiled are only in a testing phase or limited to developers, but they give an idea of how the tech giant is thinking about AI and where it’s investing. Google makes money from AI by charging developers who use its models and from customers who pay for Gemini Advanced, its competitor to ChatGPT, which costs $19.99 per month and can help users summarize PDFs, Google Docs and more.
Tuesday’s announcements follow similar events held by its AI competitors. Earlier this month, Amazon-backed Anthropic announced its first-ever enterprise offering and a free iPhone app. Meanwhile, OpenAI on Monday launched a new AI model and desktop version of ChatGPT, along with a new user interface.
Here’s what Google announced.
Gemini AI updates
Google introduced updates to Gemini 1.5 Pro, its AI model that will soon be able to handle even more data — for example, the tool can summarize 1,500 pages of text uploaded by a user.
There’s also a new Gemini 1.5 Flash AI model, which the company said is more cost-effective and designed for smaller tasks like quickly summarizing conversations, captioning images and videos and pulling data from large documents.
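For developers, the Flash model is reached through the Gemini API rather than the consumer chatbot. As a rough illustration of the kind of summarization task Google described, the sketch below assumes the google-generativeai Python SDK and an API key stored in an environment variable; the transcript and prompt are placeholders, not part of Google’s announcement.

```python
# Minimal sketch: summarizing a short conversation with Gemini 1.5 Flash.
# Assumes the google-generativeai SDK (pip install google-generativeai) and a
# GOOGLE_API_KEY environment variable; the prompt text is illustrative only.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-1.5-flash" targets the lighter, lower-cost model described above.
model = genai.GenerativeModel("gemini-1.5-flash")

transcript = """Alice: Can we move the launch review to Thursday?
Bob: Thursday works, but marketing needs the deck by Wednesday night.
Alice: Fine, I'll have a draft ready Tuesday."""

response = model.generate_content(
    "Summarize this conversation in two sentences:\n" + transcript
)
print(response.text)
```

Swapping the model name for “gemini-1.5-pro” would target the larger model described above instead.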
Google CEO Sundar Pichai highlighted improvements to Gemini’s translations, adding that it will be available to all developers worldwide in 35 languages. Within Gmail, Gemini 1.5 Pro will analyze attached PDFs and videos, giving summaries and more, Pichai said. That means that if you missed a long email thread on vacation, Gemini will be able to summarize it along with any attachments.
The new Gemini updates are also helpful for searching Gmail. One example the company gave: If you’ve been comparing prices from different contractors to fix your roof and are looking for a summary to help you decide who to pick, Gemini could return three quotes along with the anticipated start dates offered in the different email threads.
Google said Gemini will eventually replace Google Assistant on Android phones, suggesting it’s going to be a more powerful competitor to Apple’s Siri on iPhone.
Google Veo, Imagen 3 and Audio Overviews
Google announced “Veo,” its latest model for generating high-definition video, and Imagen 3, its highest quality text-to-image model, which promises lifelike images and “fewer distracting visual artifacts than our prior models.”
The tools will be available for select creators on Monday and will come to Vertex AI, Google’s machine learning platform that lets developers train and deploy AI applications.
The company also showcased “Audio Overviews,” the ability to generate audio discussions based on text input. For instance, if a user uploads a lesson plan, the chatbot can speak a summary of it. Or, if you ask for a real-life example of a science problem, it can talk you through one via interactive audio.
Separately, the company also showcased “AI Sandbox,” a range of generative AI tools for creating music and sounds from scratch, based on user prompts.
Generative AI tools such as chatbots and image creators continue to have issues with accuracy, however.
Google search boss Prabhakar Raghavan told employees last month that competitors “may have a new gizmo out there that people like to play with, but they still come to Google to verify what they see there because it is the trusted source, and it becomes more critical in this era of generative AI.”
Earlier this year, Google introduced the Gemini-powered image generator. Users discovered historical inaccuracies that went viral online, and the company pulled the feature, saying it would relaunch it in the coming weeks. The feature has still not been re-released.
New search features
The tech giant is launching “AI Overviews” in Google Search on Monday in the U.S. AI Overviews show a quick summary of answers to the most complex search questions, according to Liz Reid, head of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may display an “AI Overview” at the top with a multi-step cleaning process, gleaned from information it synthesized from around the web.
The company said it plans to introduce assistant-like planning capabilities directly within search. It explained that users will be able to search for something like “Create a 3-day meal plan for a group that’s easy to prepare” and get a starting point with a wide range of recipes from across the web.
As far as its progress to offer “multimodality,” or integrating more images and video within generative AI tools, Google said it will begin testing the ability for users to ask questions through video, such as filming a problem with a product they own, uploading it and asking the search engine to figure out the problem. In one example, Google showed someone filming a broken record player while asking why it wasn’t working. Google Search found the model of the record player and suggested that it could be malfunctioning because it wasn’t properly balanced.
Another new feature being tested is called “AI Teammate,” which will integrate into a user’s Google Workspace. It can build a searchable collection of work from messages and email threads, as well as PDFs and documents. For instance, a founder-to-be could ask the AI Teammate, “Are we ready for launch?” and the assistant will provide an analysis and summary based on the information it can access in Gmail, Google Docs and other Workspace apps.
Project Astra
Project Astra is Google’s latest advancement toward its AI assistant that’s being built by Google’s DeepMind AI unit. It’s just a prototype for now, but you can think of it as Google’s aim to develop its own version of J.A.R.V.I.S., Tony Stark’s all-knowing AI assistant from the Marvel Universe.
In the demo video presented at Google I/O, the assistant, working through video and audio rather than a chatbot interface, was able to help the user remember where they left their glasses, review code and answer questions about what a specific part of a speaker shown on camera is called.
Google said a truly useful chatbot needs to let users “talk to it naturally and without lag or delay.” The conversation in the demo video happened in real time, without lags. The demo followed OpenAI’s Monday showcase of a similar audio back-and-forth conversation with ChatGPT.
DeepMind CEO Demis Hassabis said onstage that “getting response time down to something conversational is a difficult engineering challenge.”
Pichai said he expects Project Astra to launch in Gemini later this year.
AI hardware
Google also announced Trillium, its sixth-generation TPU, or tensor processing unit — a piece of hardware integral to running complex AI operations — which is to be available to cloud customers in late 2024.
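Cloud customers typically don’t program TPUs directly; they run workloads through frameworks such as JAX or TensorFlow on Cloud TPU VMs. The snippet below is a generic, framework-level sketch of that workflow using JAX, not Trillium-specific code; it assumes a Cloud TPU VM with the jax[tpu] package installed.

```python
# Generic sketch of running a computation on Cloud TPU devices with JAX.
# Assumes a Cloud TPU VM with jax[tpu] installed; falls back to CPU elsewhere.
import jax
import jax.numpy as jnp

# Lists the accelerator cores JAX can see (TPU cores on a Cloud TPU VM).
print(jax.devices())

@jax.jit  # Compiles the function with XLA for the available accelerator.
def matmul(a, b):
    return jnp.dot(a, b)

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(k1, (1024, 1024))
b = jax.random.normal(k2, (1024, 1024))

result = matmul(a, b)
print(result.shape, result.dtype)
```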
The TPUs aren’t meant to compete with other chips, like Nvidia’s graphics processing units. Pichai noted during I/O, for example, that Google Cloud will begin offering Nvidia’s Blackwell GPUs in early 2025.
Nvidia said in March that Google will be using the Blackwell platform for “various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances,” and that access to Nvidia’s systems will help Google offer large-scale tools for enterprise developers building large language models.
In his speech, Pichai highlighted Google’s “longstanding partnership with Nvidia.” The companies have been working together for more than a decade, and Pichai has said in the past that he expects them to still be doing so a decade from now.
OpenAI has signed a deal to buy $38 billion worth of capacity from Amazon Web Services, its first contract with the leader in cloud infrastructure and the latest sign that the $500 billion artificial intelligence startup is no longer reliant on Microsoft.
Under the agreement announced on Monday, OpenAI will immediately begin running workloads on AWS infrastructure, tapping hundreds of thousands of Nvidia’s graphics processing units (GPUs) in the U.S., with plans to expand capacity in the coming years.
Amazon stock climbed about 5% following the news.
The first phase of the deal will use existing AWS data centers, and Amazon will eventually build out additional infrastructure for OpenAI.
“It’s completely separate capacity that we’re putting down,” said Dave Brown, vice president of compute and machine learning services at AWS, in an interview. “Some of that capacity is already available, and OpenAI is making use of that.”
OpenAI has been on a dealmaking spree of late, announcing roughly $1.4 trillion worth of buildout agreements with companies including Nvidia, Broadcom, Oracle and Google — prompting skeptics to warn of an AI bubble and question whether the country has the power and resources needed to turn the ambitious promises into reality.
Until this year, OpenAI had an exclusive cloud agreement with Microsoft, which first backed the company in 2019 and has invested a total of $13 billion. In January, Microsoft said it would no longer be the exclusive cloud provider for OpenAI, and was moving to an arrangement where it would have right of first refusal for new requests.
Last week, Microsoft’s preferential status expired under its newly negotiated commercial terms with OpenAI, freeing the ChatGPT creator to partner more widely with the other hyperscalers. Even before that, OpenAI forged cloud deals with Oracle and Google, but AWS is by far the market leader.
“Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in Monday’s release. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
OpenAI will still be spending heavily with Microsoft, reaffirming that commitment by saying last week that it will purchase an incremental $250 billion of Azure services.
For Amazon, the pact is significant both in the size and scale of the deal itself and because the cloud giant has close ties to OpenAI rival Anthropic. Amazon has invested billions of dollars in Anthropic, and is currently constructing an $11 billion data center campus in New Carlisle, Indiana, that’s designed exclusively for Anthropic workloads.
“The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads,” AWS CEO Matt Garman said in the release.
In its earnings report last week, Amazon reported more than 20% year-over-year revenue growth at AWS, beating analyst estimates. But growth was faster at Microsoft and Google, which reported cloud expansion of 40% and 34%, respectively.
Starting on Nvidia
The current agreement with OpenAI is explicitly for use of Nvidia chips, including two popular Blackwell models, but there’s potential to incorporate additional silicon down the road. Amazon’s custom-built Trainium chip is being used by Anthropic in the new facility.
“We like Trainium because we’re able to give customers something that gives them better price performance and honestly gives them choice,” Brown said, adding that he can’t provide any details on “anything we’ve done with OpenAI on Trainium at this point.”
The infrastructure will support both inference — such as powering ChatGPT’s real-time responses — and training of next-generation frontier models. OpenAI can expand with AWS as needed over the next seven years, but no plans beyond 2026 have been finalized.
OpenAI CEO Sam Altman (L) shakes hands with Microsoft Chief Technology Officer and Executive VP of Artificial Intelligence Kevin Scott during the Microsoft Build conference at the Seattle Convention Center Summit Building in Seattle, Washington, U.S., on May 21, 2024.
Jason Redmond | Afp | Getty Images
OpenAI’s foundation models, including so-called open-weight options, are already available on Bedrock, AWS’s managed service for accessing leading AI systems.
Companies including Peloton, Thomson Reuters, Comscore, and Triomics use OpenAI models on AWS for tasks ranging from coding and mathematical problem solving to scientific analysis and agentic workflows.
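For developers, reaching an OpenAI open-weight model through Bedrock looks like any other Bedrock call. The snippet below is a rough sketch using boto3’s Converse API; it presumes AWS credentials with Bedrock access are already configured, and the model identifier shown is an assumption for illustration that should be checked against the Bedrock model catalog.

```python
# Rough sketch: calling an OpenAI open-weight model hosted on Amazon Bedrock
# via boto3's Converse API. The model ID below is a placeholder assumption;
# check the Bedrock console for the identifiers available in your region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # hypothetical ID, for illustration
    messages=[
        {
            "role": "user",
            "content": [{"text": "Outline the steps of an agentic coding workflow."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```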
Monday’s announcement establishes a more direct relationship.
“As part of this deal, OpenAI is a customer of AWS,” Brown said. “They’ve committed to buying compute capacity from us, and we’re charging OpenAI for that capacity. It’s very, very straightforward.”
For OpenAI, the most highly valued private AI company, the AWS agreement is another step in getting ready to eventually go public. By diversifying its cloud partners and locking in long-term capacity across providers, OpenAI is signaling both independence and operational maturity.
Altman acknowledged in a recent livestream that an IPO is “the most likely path” given OpenAI’s capital needs. CFO Sarah Friar has echoed that sentiment, framing the recent corporate restructuring as a necessary step toward going public.
MongoDB CEO Dev Ittycheria arrives at the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, on July 9, 2025.
David Paul Morris | Bloomberg | Getty Images
Database software maker MongoDB said on Monday that CEO Dev Ittycheria is stepping down from the top job after an 11-year run.
Chirantan “CJ” Desai, who has spent the past year as president of product and engineering at Cloudflare, is replacing Ittycheria, effective Nov. 10, MongoDB said. Ittycheria will remain on the company’s board.
“Earlier this year, I would say as part of our normal succession planning process, the board asked me about my long-term plans and whether I could commit for another five years as CEO,” Ittycheria told CNBC in an interview. “I thought long and hard about it, and I talked to my family, I talked to the board and ultimately realized I couldn’t make that kind of decision.”
Before joining MongoDB, Ittycheria was president of BMC, which bought his company BladeLogic for $854 million in 2008. As BladeLogic’s co-founder and CEO, Ittycheria took the company public in 2007. He’s also been an investor at venture firms OpenView and Greylock.
Ittycheria led MongoDB’s IPO in 2017, three years after taking the helm. The company won over individual software developers thanks to its database’s architecture that could store a variety of data in documents, challenging market incumbents like Oracle.
Under Ittycheria, the company prioritized cloud subscriptions, landed multi-year deals, partnered with rival cloud providers Amazon and Microsoft and expanded the software’s capabilities into generative artificial intelligence.
MongoDB’s stock closed on Friday at $359.82, representing a fifteenfold gain since the IPO and lifting the company’s market cap to almost $30 billion. MongoDB’s net loss in the July quarter narrowed to $47 million from $54.5 million a year earlier, while revenue rose 24% to $591 million.
Cloudflare said in a filing on Thursday that Desai would step down on Nov. 7 to become CEO “at another notable, publicly-traded company.” Desai previously served as operating chief at ServiceNow. He resigned in July 2024, after the software company found a policy violation related to its hiring of the U.S. Army’s former chief information officer. Previously, Desai held leadership positions at EMC and Symantec.
“We talked to people close to ServiceNow, as well as other people who know CJ really well, and we felt very, very comfortable that CJ is the right person to lead MongoDB in this next era,” Ittycheria said.
Desai, whose first job out of college was at Oracle, said he will split his time between New York and the San Francisco area.
MongoDB also said it expects to exceed the high end of its guidance ranges for revenue and adjusted earnings per share in the fiscal third quarter. The top end of its range was 79 cents per share in earnings, and $592 million in revenue.
Desai said he’s “looking forward to grow MongoDB to $5 billion-plus in a durable, profitable way, in revenues, and most importantly, to be the gold standard for modern database technology, no matter what kind of workloads exist.” He did not offer a timeline for the revenue goal.
Executives will discuss the leadership change on a conference call with analysts at 10 a.m. ET.
Jensen Huang, CEO of Nvidia, speaks during the 2025 Asia-Pacific Economic Cooperation (APEC) CEO Summit in Gyeongju, South Korea, October 31, 2025.
Kim Soo-hyeon | Reuters
Microsoft said Monday it has secured export licenses to ship Nvidia chips to the United Arab Emirates in a move that could accelerate the Gulf’s lofty AI ambitions.
The tech giant said it is the first company under U.S. President Donald Trump‘s administration to secure such licenses from the Commerce Department and that the approval, granted in September, was based on “updated and stringent technology safeguards.”
The licenses enable the company to ship the equivalent of 60,400 additional Nvidia A100 chips, delivered in the form of the chipmaker’s more advanced GB300 GPUs.
“While the chips are powerful and the numbers are large, more important is their positive impact across the UAE,” Microsoft said in a blog post. “We’re using these GPUs to provide access to advanced AI models from OpenAI, Anthropic, open-source providers, and Microsoft itself.”
Nvidia shares climbed 3% Monday. Microsoft stock rose slightly.
Azad Zangana, head of GCC macroeconomic analysis at Oxford Economics, said in a note that Nvidia’s chips are “crucial” for the UAE’s push to be a major global player in AI.
“Access to the world’s leading AI chips provides the hardware that will give developers the leading edge that is needed in an incredibly competitive global landscape,” Zangana wrote.
There is a “very important” relationship between the UAE and U.S. governments that has spanned multiple administrations, Microsoft President Brad Smith told CNBC’s Dan Murphy at the ADIPEC conference in Abu Dhabi.
“We’re very grateful to the Secretary of Commerce Howard Lutnick, and the work that he has championed to enable export licenses to be made available to us,” Smith said. “That builds as well on the relationships we had with Secretary [Marco] Rubio when he was in the Senate and Democrats as well. [It] takes two parties to govern, and we keep that in mind.”
Microsoft also announced it will be increasing its investment in the UAE, bringing its total contribution to $15.2 billion by the end of this decade.
That includes a $1.5 billion equity investment in AI firm G42 and more than $5.5 billion in capital expenses for the expansion of Microsoft’s AI and cloud infrastructure projects in the region.
“We’re really investing in trust, and I think it’s that combination of technology, talent and trust that you’re seeing come together here in the UAE, around AI, around technology, but really the future of the whole economy,” Smith said.