Google CEO Sundar Pichai speaks at the Google I/O developer conference.
Google on Tuesday hosted its annual I/O developer conference, and rolled out a range of artificial intelligence products, from new search and chat features to AI hardware for cloud customers. The announcements underscore the company’s focus on AI as it fends off competitors, such as OpenAI.
Many of the features or tools Google unveiled are only in a testing phase or limited to developers, but they give an idea of how the tech giant is thinking about AI and where it’s investing. Google makes money from AI by charging developers who use its models and from customers who pay for Gemini Advanced, its competitor to ChatGPT, which costs $19.99 per month and can help users summarize PDFs, Google Docs and more.
Tuesday’s announcements follow similar events held by Google’s AI competitors. Earlier this month, Amazon-backed Anthropic announced its first-ever enterprise offering and a free iPhone app. Meanwhile, OpenAI on Monday launched a new AI model and desktop version of ChatGPT, along with a new user interface.
Here’s what Google announced.
Gemini AI updates
Google introduced updates to Gemini 1.5 Pro, its flagship AI model, which will soon be able to handle even more data. For example, the tool can summarize 1,500 pages of text uploaded by a user.
There’s also a new Gemini 1.5 Flash AI model, which the company said is more cost-effective and designed for smaller tasks like quickly summarizing conversations, captioning images and videos and pulling data from large documents.
Google CEO Sundar Pichai highlighted improvements to Gemini’s translations, adding that it will be available to all developers worldwide in 35 languages. Within Gmail, Gemini 1.5 Pro will analyze attached PDFs and videos, giving summaries and more, Pichai said. That means that if you missed a long email thread on vacation, Gemini will be able to summarize it along with any attachments.
The new Gemini updates are also helpful for searching Gmail. One example the company gave: If you’ve been comparing prices from different contractors to fix your roof and are looking for a summary to help you decide who to pick, Gemini could return three quotes along with the anticipated start dates offered in the different email threads.
Google said Gemini will eventually replace Google Assistant on Android phones, suggesting it’s going to be a more powerful competitor to Apple’s Siri on iPhone.
Google Veo, Imagen 3 and Audio Overviews
Google announced “Veo,” its latest model for generating high-definition video, and Imagen 3, its highest quality text-to-image model, which promises lifelike images and “fewer distracting visual artifacts than our prior models.”
The tools will be available for select creators on Monday and will come to Vertex AI, Google’s machine learning platform that lets developers train and deploy AI applications.
The company also showcased “Audio Overviews,” the ability to generate audio discussions based on text input. For instance, if a user uploads a lesson plan, the chatbot can speak a summary of it. Or, if you ask for an example of a science problem in real life, it can do so through interactive audio.
Separately, the company also showcased “AI Sandbox,” a range of generative AI tools for creating music and sounds from scratch, based on user prompts.
Generative AI tools such as chatbots and image creators continue to have issues with accuracy, however.
Google search boss Prabhakar Raghavan told employees last month that competitors “may have a new gizmo out there that people like to play with, but they still come to Google to verify what they see there because it is the trusted source, and it becomes more critical in this era of generative AI.”
Earlier this year, Google introduced the Gemini-powered image generator. Users discovered historical inaccuracies that went viral online, and the company pulled the feature, saying it would relaunch it in the coming weeks. The feature has still not been re-released.
New search features
The tech giant is launching “AI Overviews” in Google Search on Monday in the U.S. AI Overviews show a quick summary of answers to the most complex search questions, according to Liz Reid, head of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may display an “AI Overview” at the top with a multi-step cleaning process, gleaned from information it synthesized from around the web.
The company said it plans to introduce assistant-like planning capabilities directly within search. It explained that users will be able to search for something like, “Create a 3-day meal plan for a group that’s easy to prepare,” and they’ll get a starting point with a wide range of recipes from across the web.
As for its progress on “multimodality,” or integrating more images and video within generative AI tools, Google said it will begin testing the ability for users to ask questions through video, such as filming a problem with a product they own, uploading it and asking the search engine to figure out the problem. In one example, Google showed someone filming a broken record player while asking why it wasn’t working. Google Search found the model of the record player and suggested that it could be malfunctioning because it wasn’t properly balanced.
Another new feature being tested is called “AI Teammate,” which will integrate into a user’s Google Workspace. It can build a searchable collection of work from messages and email threads, along with PDFs and documents. For instance, a founder-to-be could ask the AI Teammate, “Are we ready for launch?” and the assistant will provide an analysis and summary based on the information it can access in Gmail, Google Docs and other Workspace apps.
Project Astra
Project Astra is Google’s latest advance toward an AI assistant, built by its DeepMind AI unit. It’s just a prototype for now, but you can think of it as Google’s aim to develop its own version of J.A.R.V.I.S., Tony Stark’s all-knowing AI assistant from the Marvel Universe.
In the demo video presented at Google I/O, the assistant — through video and audio, rather than a chatbot interface — was able to help the user remember where they left their glasses, review code and answer questions about what a certain part of a speaker is called, when that speaker was shown on video.
Google said a truly useful chatbot needs to let users “talk to it naturally and without lag or delay.” The conversation in the demo video happened in real time, without lags. The demo followed OpenAI’s Monday showcase of a similar audio back-and-forth conversation with ChatGPT.
DeepMind CEO Demis Hassabis said onstage that “getting response time down to something conversational is a difficult engineering challenge.”
Pichai said he expects Project Astra to launch in Gemini later this year.
AI hardware
Google also announced Trillium, its sixth-generation TPU, or tensor processing unit — a piece of hardware integral to running complex AI operations — which will be available to cloud customers in late 2024.
The TPUs aren’t meant to compete with other chips, like Nvidia’s graphics processing units. Pichai noted during I/O, for example, that Google Cloud will begin offering Nvidia’s Blackwell GPUs in early 2025.
Nvidia said in March that Google will be using the Blackwell platform for “various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances,” and that access to Nvidia’s systems will help Google offer large-scale tools for enterprise developers building large language models.
In his speech, Pichai highlighted Google’s “longstanding partnership with Nvidia.” The companies have been working together for more than a decade, and Pichai has said in the past that he expects them to still be doing so a decade from now.
Shoppers looking for gadgets and gizmos powered by generative AI technology to gift to their loved ones won’t have many options to choose from this holiday season.
Generative artificial intelligence has taken Silicon Valley by storm since the launch of OpenAI’s ChatGPT chatbot in November 2022. Although startups have raised billions to build new GenAI tools and tech giants have bought millions of Nvidia processors to train AI models, few companies have delivered new hardware built with the new-age tech as its focal point.
There was a lot of optimism over the potential of GenAI gadgets at the CES trade show in January, said Paul Gagnon, vice president at analyst firm Circana. In particular, products from high-profile startups such as Humane and Rabbit, which were marketed as being able to translate, answer questions, take voice memos and set alarms, were drawing buzz, Gagnon said.
But many of these new GenAI devices didn’t work as well as people expected, with reviewers saying that the gadgets were too slow and too prone to failure.
“As we’ve gone through the year, and those kinds of promises — which I’ll be honest, were pretty nebulous to start with — there’s been a bit of a struggle with communicating that to consumers,” Gagnon said.
A key reason GenAI hardware hasn’t had a breakthrough is that current devices are “compute restrained,” meaning they require more powerful silicon chips and related components to perform better, particularly when compared with smartphones, said Ben Bajarin, CEO of Creative Strategies, a market research firm.
Additionally, consumers may find current GenAI devices too expensive, and they may be confused about what the devices can actually do, he said.
GenAI devices, such as the Ray-Ban Meta smart glasses, also typically require a smartphone connection for an accompanying app as well as strong internet access, because a bad internet connection can lead to performance delays that frustrate people, Bajarin said.
While companies such as Microsoft, Apple, Intel, Dell and Lenovo have also heavily marketed new lineups of personal computers capable of performing GenAI tasks, consumers have yet to perk up to the sales pitch, said Ryan Reith, an IDC program vice president for mobile devices.
“I don’t think that there’s actually a need for consumers to go out and get one of these more expensive PCs,” Reith said, noting that people may be confused about why they need beefier computers when they can already access tools such as ChatGPT through their current PCs.
The reality is that while GenAI has captivated Silicon Valley, it’s still “inning zero” in regard to widespread adoption, Bajarin said.
“Even though I can rattle off all these productivity stats of how people are using AI today, it’s a very small number of people,” he said. “This is not mainstream.”
It may not be until 2025 that consumers see a “big explosion” in GenAI computers, smartphones and new gadgets, said Steve Koenig, vice president of research at the Consumer Technology Association, which produces CES.
Despite Silicon Valley not having a breakout year for GenAI hardware, here are a few GenAI devices early adopters can buy.
Ray-Ban Meta glasses
Meta released the second generation of its Ray-Ban smart glasses in 2023, but the company began rolling out GenAI features for the device earlier this year and announced several new AI capabilities at its Connect event in September.
The glasses don’t offer users augmented reality capabilities, but people can use the device to take photos, listen to music and ask the Meta AI digital assistant for information about the things within their field of view.
With the help of the device’s mics and camera, for instance, users can ask the Meta AI digital assistant to recommend a recipe when they walk through a grocery aisle and scan the shelves, the company said in a blog post.
Meta, which makes Facebook and Instagram, is selling certain versions of the glasses for 20% off through Dec. 2. This means that a pair of the Ray-Ban Meta Skyler style of glasses will cost $239.20 instead of $299 if bought online.
Rabbit r1
The Rabbit r1 is a $200 gizmo that looks like an orange, miniaturized tablet with a playful aesthetic that’s more Nintendo Switch than Apple iPad.
Outfitted with a camera and dual mics, the r1 can record audio clips and set timers or perform more advanced tasks, such as helping users recall details from past conversations, search results and voice recordings. After the device began shipping in March, reviewers criticized the r1 for stumbling at various tasks and failing to outshine smartphones that can do many of the same functions.
The startup “has used that feedback to rapidly make very significant improvements to the user experience” and has released scores of updates to improve, Rabbit CEO Jesse Lyu told CNBC in a statement.
Despite the harsh reviews, Rabbit has “sold more than 100,000 r1 devices when we originally expected to sell only 3,000” and the company is “seeing a return rate of less than 5%, which is very solid for a first-generation product,” Lyu said.
Rabbit is currently running a deal that gives shoppers free shipping, or $15 off, if they order an r1 by Dec. 4.
Bee
After raising $7 million in funding in July, the startup Bee AI will begin selling its GenAI device, the Bee, on Friday.
The Bee looks like an internet-connected smartwatch and functions like an advanced digital assistant. Its dual mics allow it to listen and analyze people’s voice memos and conversations to provide summaries and to-do lists, Bee AI CEO Maria de Lourdes Zollo told CNBC.
The Bee can also be integrated with health-care tools and people’s Google and Gmail accounts to help generate personalized summaries and action items, Zollo said. Although the startup offers a Bee app for the Apple Watch for people who don’t want to buy another hardware device, she said the core Bee device is better at understanding voices in loud environments.
Shoppers can buy the Bee for $49.99 and get its basic tasks, but they will have to pay a $15-per-month subscription for more features such as “better memory or better capabilities,” Zollo said.
For Black Friday, Bee is offering shoppers three free months of the device’s subscription service. The device should ship in time for Christmas, Zollo said.
Just Eat Takeaway said it was delisting its shares from the London Stock Exchange due to the “low liquidity and trading volumes” of its shares on the exchange.
Just Eat Takeaway will delist from the London Stock Exchange next month, in a blow to the U.K.’s ambitions to attract more high-growth tech firms to its stock market.
After completing a review of optimal listing venues, the Anglo-Dutch food delivery firm said Wednesday that it intends to delist from London’s stock exchange, making Amsterdam Just Eat Takeaway.com’s sole trading venue.
Explaining its decision, Just Eat Takeaway said it was delisting its shares from the LSE in a bid to “reduce the administrative burden, complexity and costs associated with the disclosure and regulatory requirements of maintaining the LSE listing, and in the context of low liquidity and trading volumes.”
Just Eat Takeaway shares slipped 1.5% following the delisting announcement.
It has requested that the LSE and the Financial Conduct Authority, the U.K.’s markets watchdog, cancel its listing, so that it can remain primarily listed on the Amsterdam exchange.
The delisting will become effective from 8 a.m. London time on Dec. 27, while Dec. 24 will mark the last date of trading of Just Eat Takeaway’s shares on the LSE.
Earlier this month, Just Eat Takeaway.com said it would sell its GrubHub arm to New York-based online takeout startup Wonder for $650 million — a huge discount compared to the $7.3 billion the firm paid for the U.S. food delivery app.
Reddit is ramping up efforts to attract more users outside of the U.S., putting countries like India and Brazil in focus as it looks to unlock new advertising opportunities, a top company executive told CNBC.
In a wide-ranging interview, Jen Wong, chief operating officer of Reddit, said other platforms have 80% to 90% of users outside of the U.S. while about half of her company’s current users are based internationally.
“So that points to a lot of our future user growth opportunity definitely outside of the U.S. and local language,” Wong told CNBC. “The opportunity, the way I think about it, is every language is an opportunity for another Reddit.”
Reddit has historically been an English-language platform, but the company is looking to expand its international reach with the help of artificial intelligence translations. This year, Reddit launched a feature that automatically translates its site into different languages.
Wong said that around 20 to 30 languages could be available by the end of the year.
India opportunity
Among the company’s fastest-growing markets in terms of users are the U.K., the Philippines, India and Brazil.
“India’s growing really rapidly,” Wong said. “We see a big opportunity in India.”
The Reddit COO said that India has a large English-speaking internet population, and there are lots of engaged users around topics like cricket and the Bollywood movie industry.
Wong also said Reddit has been meeting with “mods” — or moderators, who oversee content on communities on the site.
Advertising opportunity
Growth in markets like India can propel Reddit to boost ad revenue, its main source of income.
International markets account for just over 17% of Reddit’s revenue currently, according to the company’s third-quarter results, despite around 50% of its users being located outside the U.S.
Wong said that Reddit first attempts cross-border advertising for international markets, such as when a European brand is looking to advertise in the U.S. Then, when Reddit reaches about 10% of a country’s internet population, there is an opportunity to build teams focused on local advertising — like an Indian brand advertising to Indian users.
This has not yet happened in many markets, but Reddit is keeping an eye on many of its fastest growing countries, Wong said.
New search tools
Reddit users will know that it’s not always the easiest site to find what you’re looking for — a drawback that the company is now looking to change with new search tools.
During Reddit’s third-quarter earnings call last month, CEO Steve Huffman called search on the platform a “focused investment” in 2025.
Wong expanded that the company is thinking of its search feature as a way of helping users to navigate around the site to find similar topics or posts that they may have otherwise missed.
“You land on a post, but it’s almost like a dead end. But there are a lot of posts, often like that post, or there are other posts like that post in other communities. And so giving you a total view of what that looks like is a really interesting opportunity,” Wong said.
“Guiding you through Reddit as you follow that line of thinking, is how we think of the opportunity.”
Wong declined to say more except, “We’re testing a lot of things.”