Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and runs through November 17.
Google is launching what it considers its largest and most capable artificial intelligence model Wednesday as pressure mounts on the company to answer how it’ll monetize AI.
The large language model Gemini will come in a suite of three sizes: Gemini Ultra, its largest and most capable model; Gemini Pro, which scales across a wide range of tasks; and Gemini Nano, which is designed for specific tasks and mobile devices.
For now, the company is planning to license Gemini to customers through Google Cloud for them to use in their own applications. Starting Dec. 13, developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers will also be able to build with Gemini Nano. Gemini will also be used to power Google products like its Bard chatbot and Search Generative Experience, which tries to answer search queries with conversational-style text (SGE is not widely available yet).
Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities, the company said in a blog post Wednesday. It can supposedly understand nuance and reasoning in complex subjects.
Sundar Pichai, chief executive officer of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, California, US, on Wednesday, May 10, 2023.
David Paul Morris | Bloomberg | Getty Images
“Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research,” wrote CEO Sundar Pichai in a blog post Wednesday. “It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.”
Starting today, Google’s chatbot Bard will use Gemini Pro to help with advanced reasoning, planning, understanding and other capabilities. Early next year, it will launch “Bard Advanced,” which will use Gemini Ultra, executives said on a call with reporters Tuesday. It represents the biggest update to Bard, its ChatGPT-like chatbot.
The update comes eight months after the search giant first launched Bard and one year after OpenAI launched ChatGPT, powered by GPT-3.5. In March of this year, the Sam Altman-led startup launched GPT-4. Executives said Tuesday that Gemini Pro outperformed GPT-3.5 but dodged questions about how it stacked up against GPT-4.
When asked if Google has plans to charge for access to “Bard Advanced,” Google’s general manager for Bard, Sissie Hsiao, said it is focused on creating a good experience and doesn’t have any monetization details yet.
When asked at a press briefing whether Gemini has any novel capabilities compared with current-generation LLMs, Eli Collins, vice president of product at Google DeepMind, answered, "I suspect it does," adding that the company is still working to understand Gemini Ultra's novel capabilities.
Google reportedly postponed the launch of Gemini because it wasn’t ready, bringing back memories of the company’s rocky rollout of its AI tools at the beginning of the year.
Multiple reporters asked about the delay, to which Collins answered that testing the more advanced models takes longer. Collins said Gemini is the most highly tested AI model the company has built and that it has "the most comprehensive safety evaluations" of any Google model.
Collins said that despite being its largest model, Gemini Ultra is significantly cheaper to serve. “It’s not just more capable, it’s more efficient,” he said. “We still require significant compute to train Gemini but we’re getting much more efficient in terms of our ability to train these models.”
Collins said the company will release a technical white paper with more details of the model on Wednesday but said it won't be releasing the parameter count. Earlier this year, CNBC found Google's PaLM 2 large language model, its latest AI model at the time, used nearly five times the amount of text data for training as its predecessor LLM.
Also on Wednesday, Google introduced its next-generation tensor processing unit for training AI models. The TPU v5p chip, which Salesforce and startup Lightricks have begun using, offers better performance for the price than the TPU v4 announced in 2021, Google said. But the company didn’t provide information on performance compared with market leader Nvidia.
The chip announcement comes weeks after cloud rivals Amazon and Microsoft showed off custom silicon targeting AI.
During Google’s third-quarter earnings conference call in October, investors pressed executives on how the company plans to turn AI into actual profit.
In August, Google launched an “early experiment” called Search Generative Experience, or SGE, which lets users see what a generative AI experience would look like when using the search engine — search is still a major profit center for the company. The result is more conversational, reflecting the age of chatbots. However, it is still considered an experiment and has yet to launch to the general public.
Investors have been asking for a timeline for SGE since May, when the company first announced the experiment at its annual developer conference, Google I/O. The Gemini announcement Wednesday hardly mentioned SGE, and executives were vague about plans to launch it to the general public, saying that Gemini would be incorporated into it “in the next year.”
“This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company,” Pichai said in Wednesday’s blog post. “I’m genuinely excited for what’s ahead, and for the opportunities Gemini will unlock for people everywhere.”
Hidden among the majestic canyons of the Utah desert, about 7 miles from the nearest town, is a small research facility meant to prepare humans for life on Mars.
The Mars Society, a nonprofit organization that runs the Mars Desert Research Station, or MDRS, invited CNBC to shadow one of its analog crews on a recent mission.
“MDRS is the best analog astronaut environment,” said Urban Koi, who served as health and safety officer for Crew 315. “The terrain is extremely similar to the Mars terrain and the protocols, research, science and engineering that occurs here is very similar to what we would do if we were to travel to Mars.”
SpaceX CEO and Mars advocate Elon Musk has said his company can get humans to Mars as early as 2029.
The five-person Crew 315 spent two weeks living at the research station following the same procedures that they would on Mars.
David Laude, who served as the crew’s commander, described a typical day.
“So we all gather around by 7 a.m. around a common table in the upper deck and we have breakfast,” he said. “Around 8:00 we have our first meeting of the day where we plan out the day. And then in the morning, we usually have an EVA of two or three people and usually another one in the afternoon.”
An EVA refers to extravehicular activity. In NASA speak, EVAs refer to spacewalks, when astronauts leave the pressurized space station and must wear spacesuits to survive in space.
“I think the most challenging thing about these analog missions is just getting into a rhythm. … Although here the risk is lower, on Mars performing those daily tasks are what keeps us alive,” said Michael Andrews, the engineer for Crew 315.
Formula One F1 – United States Grand Prix – Circuit of the Americas, Austin, Texas, U.S. – October 23, 2022 Tim Cook waves the chequered flag to the race winner Red Bull’s Max Verstappen
Mike Segar | Reuters
Apple had two major launches last month. They couldn’t have been more different.
First, Apple revealed some of the artificial intelligence advancements it had been working on in the past year when it released developer versions of its operating systems to muted applause at its annual developer’s conference, WWDC. Then, at the end of the month, Apple hit the red carpet as its first true blockbuster movie, “F1,” debuted to over $155 million — and glowing reviews — in its first weekend.
While “F1” was a victory lap for Apple, highlighting the strength of its long-term outlook, the growth of its services business and its ability to tap into culture, Wall Street’s reaction to the company’s AI announcements at WWDC suggests there’s some trouble underneath the hood.
“F1” showed Apple at its best — in particular, its ability to invest in new, long-term projects. When Apple TV+ launched in 2019, it had only a handful of original shows and one movie, a film festival darling called “Hala” that didn’t even share its box office revenue.
Despite Apple TV+ being written off as a costly side project, Apple stuck with its plan over the years, expanding its staff and operation in Culver City, California. That allowed the company to build up Hollywood connections, especially for TV shows, and build an entertainment track record. Now, an Apple Original can lead the box office on a summer weekend, the prime season for blockbuster films.
The success of “F1” also highlights Apple’s significant marketing machine and ability to get big-name talent to appear with its leadership. Apple pulled out all the stops to market the movie, including using its Wallet app to send a push notification with a discount for tickets to the film. To promote “F1,” Cook appeared with movie star Brad Pitt at an Apple store in New York and posted a video with actual F1 racer Lewis Hamilton, who was one of the film’s producers.
(L-R) Brad Pitt, Lewis Hamilton, Tim Cook, and Damson Idris attend the World Premiere of “F1: The Movie” in Times Square on June 16, 2025 in New York City.
Jamie Mccarthy | Getty Images Entertainment | Getty Images
Although Apple services chief Eddy Cue said in a recent interview that Apple needs its film business to be profitable to “continue to do great things,” “F1” isn’t just about the bottom line for the company.
Apple’s Hollywood productions are perhaps the most prominent face of the company’s services business, a profit engine that has been an investor favorite since the iPhone maker started highlighting the division in 2016.
Films will only ever be a small fraction of the services unit, which also includes payments, iCloud subscriptions, magazine bundles, Apple Music, game bundles, warranties, fees related to digital payments and ad sales. Plus, even the biggest box office smashes would be small on Apple’s scale — the company does over $1 billion in sales on average every day.
But movies are the only services component that can get celebrities like Pitt or George Clooney to appear next to an Apple logo — and the success of “F1” means that Apple could do more big popcorn films in the future.
“Nothing breeds success or inspires future investment like a current success,” said Comscore senior media analyst Paul Dergarabedian.
But if “F1” is a sign that Apple’s services business is in full throttle, the company’s AI struggles are a “check engine” light that won’t turn off.
Replacing Siri’s engine
At WWDC last month, Wall Street was eager to hear about the company’s plans for Apple Intelligence, its suite of AI features that it first revealed in 2024. Apple Intelligence, a key selling point of the company’s hardware products, had a rollout marred by delays and underwhelming features.
Apple spent most of WWDC going over smaller machine learning features, but did not reveal what investors and consumers increasingly want: A sophisticated Siri that can converse fluidly and get stuff done, like making a restaurant reservation. In the age of OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini, the expectation of AI assistants among consumers is growing beyond “Siri, how’s the weather?”
The company had previewed a significantly improved Siri in the summer of 2024, but earlier this year, those features were delayed to sometime in 2026. At WWDC, Apple didn’t offer any updates about the improved Siri beyond that the company was “continuing its work to deliver” the features in the “coming year.” Some observers reduced their expectations for Apple’s AI after the conference.
“Current expectations for Apple Intelligence to kickstart a super upgrade cycle are too high, in our view,” wrote Jefferies analysts this week.
Siri should be a showcase for Apple’s ability to improve products over the long term, the very trait that makes the company tough to compete with.
It beat nearly every other voice assistant to market when it debuted on iPhones in 2011. Fourteen years later, Siri remains essentially the same rigid, one-off question-and-answer system that struggles with open-ended questions and dates, even after the arrival in recent years of sophisticated generative AI voice bots that can hold a conversation.
Apple’s strongest rivals, including Android parent Google, have done way more to integrate sophisticated AI assistants into their devices than Apple has. And Google doesn’t have the same reflex against collecting data and cloud processing as privacy-obsessed Apple.
Some analysts have said they believe Apple has a few years before the company’s lack of competitive AI features will start to show up in device sales, given the company’s large installed base and high customer loyalty. But Apple can’t get lapped before it re-enters the race, and its former design guru Jony Ive is now working on new hardware with OpenAI, ramping up the pressure in Cupertino.
“The three-year problem, which is within an investment time frame, is that Android is racing ahead,” Needham senior internet analyst Laura Martin said on CNBC this week.
Apple’s services success with projects like “F1” is an example of what the company can do when it sets clear goals in public and then executes them over extended time-frames.
Its AI strategy could use a similar long-term plan, as customers and investors wonder when Apple will fully embrace the technology that has captivated Silicon Valley.
Wall Street’s anxiety over Apple’s AI struggles was evident this week after Bloomberg reported that Apple was considering replacing Siri’s engine with Anthropic or OpenAI’s technology, as opposed to its own foundation models.
The move, if it were to happen, would contradict one of Apple’s most important strategies in the Cook era: Apple wants to own its core technologies, like the touchscreen, processor, modem and maps software, not buy them from suppliers.
Using external technology would be an admission that Apple Foundation Models aren’t good enough yet for what the company wants to do with Siri.
“They’ve fallen farther and farther behind, and they need to supercharge their generative AI efforts,” Martin said. “They can’t do that internally.”
Apple might even pay billions for the use of Anthropic’s AI software, according to the Bloomberg report. If Apple were to pay for AI, it would be a reversal from current services deals, like the search deal with Alphabet under which the Cupertino company gets paid $20 billion per year to push iPhone traffic to Google Search.
The company didn’t confirm the report and declined to comment, but Wall Street welcomed the news and Apple shares rose.
In the world of AI in Silicon Valley, signing bonuses for the kinds of engineers that can develop new models can range up to $100 million, according to OpenAI CEO Sam Altman.
“I can’t see Apple doing that,” Martin said.
Earlier this week, Meta CEO Mark Zuckerberg sent a memo bragging about hiring 11 AI experts from companies such as OpenAI, Anthropic, and Google’s DeepMind. That came after Zuckerberg hired Scale AI CEO Alexandr Wang to lead a new AI division as part of a $14.3 billion deal.
Meta’s not the only company to spend hundreds of millions on AI celebrities to get them in the building. Google spent big to hire away the founders of Character.AI, Microsoft got its AI leader by striking a deal with Inflection and Amazon hired the executive team of Adept to bulk up its AI roster.
Apple, on the other hand, hasn’t announced any big AI hires in recent years. While Cook rubs shoulders with Pitt, the actual race may be passing Apple by.
Tesla CEO Elon Musk speaks alongside U.S. President Donald Trump to reporters in the Oval Office of the White House on May 30, 2025 in Washington, DC.
Kevin Dietsch | Getty Images
Tesla CEO Elon Musk, who blasted President Donald Trump‘s signature spending bill for weeks, on Friday made his first comments since the legislation passed.
Musk backed a post on X by Sen. Rand Paul, R-Ky., who said the bill’s budget “explodes the deficit” and continues a pattern of “short-term politicking over long-term sustainability.”
The House of Representatives narrowly passed the One Big Beautiful Bill Act on Thursday, sending it to Trump to sign into law.
Paul and Musk have been vocal opponents of Trump’s tax and spending bill, and repeatedly called out the potential for the spending package to increase the national debt.
The independent Congressional Budget Office has said the bill could add $3.4 trillion to the $36.2 trillion of U.S. debt over the next decade. The White House has labeled the agency as “partisan” and has repeatedly disputed the CBO’s estimates.
The bill includes trillions of dollars in tax cuts, increased spending for immigration enforcement and large cuts to funding for Medicaid and other programs.
It also cuts tax credits and support for solar and wind energy and electric vehicles, a particularly sore spot for Musk, who has several companies that benefit from the programs.
“I took away his EV Mandate that forced everyone to buy Electric Cars that nobody else wanted (that he knew for months I was going to do!), and he just went CRAZY!” Trump wrote in a social media post in early June as the pair traded insults and threats.
Shares of Tesla plummeted as the feud intensified, with the company losing $152 billion in market cap on June 5 and putting the company below $1 trillion in value. The stock has largely rebounded since, but is still below where it was trading before the ruckus with Trump.
Tesla one-month stock chart.
— CNBC’s Kevin Breuninger and Erin Doherty contributed to this article.