Sundar Pichai, chief executive officer of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, California, on Wednesday, May 10, 2023.
David Paul Morris | Bloomberg | Getty Images
Google’s new large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks, CNBC has learned.
PaLM 2, the company’s new general-use large language model (LLM) that was unveiled at Google I/O, is trained on 3.6 trillion tokens, according to internal documentation viewed by CNBC. Tokens, which are strings of words, are an important building block for training LLMs, because they teach the model to predict the next word that will appear in a sequence.
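To make that concrete, here is a minimal sketch of how a tokenized sequence becomes next-word prediction examples. It uses a toy whitespace tokenizer purely for illustration; it is not Google’s actual tokenizer or training pipeline.

```python
# Toy illustration of next-token prediction training data.
# Real LLMs use subword tokenizers and integer token IDs; splitting on
# whitespace here is an assumption made only to keep the example readable.
def next_token_examples(text: str):
    tokens = text.split()
    examples = []
    for i in range(1, len(tokens)):
        context = tokens[:i]   # everything the model has "seen" so far
        target = tokens[i]     # the token it must learn to predict
        examples.append((context, target))
    return examples

for context, target in next_token_examples("the cat sat on the mat"):
    print(context, "->", target)
```

Each (context, target) pair is one training signal; counts like 3.6 trillion refer to the total number of such tokens a model sees during training.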
Google’s previous version of PaLM, which stands for Pathways Language Model, was released in 2022 and trained on 780 billion tokens.
While Google has been eager to showcase the power of its artificial intelligence technology and how it can be embedded into search, emails, word processing and spreadsheets, the company has been unwilling to publish the size or other details of its training data. OpenAI, the Microsoft-backed creator of ChatGPT, has also kept secret the specifics of its latest LLM called GPT-4.
The reason for the lack of disclosure, the companies say, is the competitive nature of the business. Google and OpenAI are rushing to attract users who may want to search for information using conversational chatbots rather than traditional search engines.
But as the AI arms race heats up, the research community is demanding greater transparency.
Since unveiling PaLM 2, Google has said the new model is smaller than prior LLMs, which is significant because it means the company’s technology is becoming more efficient while accomplishing more sophisticated tasks. PaLM 2, according to internal documents, has 340 billion parameters, an indication of the model’s complexity. The initial PaLM had 540 billion parameters.
Google didn’t immediately provide a comment for this story.
Google said in a blog post about PaLM 2 that the model uses a “new technique” called “compute-optimal scaling.” That makes the LLM “more efficient with overall better performance, including faster inference, fewer parameters to serve, and a lower serving cost.”
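Google hasn’t published the details of its approach, but in the research literature “compute-optimal scaling” generally means choosing the parameter count and the number of training tokens together so that model quality is maximized for a fixed training-compute budget. A commonly cited approximation for the training compute of a dense transformer is C ≈ 6 × N × D, where N is the parameter count and D is the number of training tokens. Under that framing, a smaller model trained on more tokens, as with PaLM 2’s reported 340 billion parameters and 3.6 trillion tokens versus the original PaLM’s 540 billion parameters and 780 billion tokens, can use a comparable compute budget while being cheaper to serve.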
In announcing PaLM 2, Google confirmed CNBC’s previous reporting that the model is trained on 100 languages and performs a broad range of tasks. It’s already being used to power 25 features and products, including the company’s experimental chatbot Bard. It’s available in four sizes, from smallest to largest: Gecko, Otter, Bison and Unicorn.
PaLM 2 is more powerful than any existing model, based on public disclosures. Facebook’s LLM called LLaMA, which it announced in February, is trained on 1.4 trillion tokens. The last time OpenAI shared a training size was with GPT-3, which the company said was trained on 300 billion tokens. OpenAI released GPT-4 in March, and said it exhibits “human-level performance” on many professional tests.
LaMDA, a conversational LLM that Google introduced two years ago and touted in February alongside Bard, was trained on 1.5 trillion tokens, according to the latest documents viewed by CNBC.
As new AI applications quickly hit the mainstream, controversies surrounding the underlying technology are getting more spirited.
El Mahdi El Mhamdi, a senior Google Research scientist, resigned in February over the company’s lack of transparency. On Tuesday, OpenAI CEO Sam Altman testified at a hearing of the Senate Judiciary subcommittee on privacy and technology, and agreed with lawmakers that a new system to deal with AI is needed.
“For a very new technology we need a new framework,” Altman said. “Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world.”
Several AI applications can be seen on a smartphone screen, including ChatGPT, Claude, Gemini, Perplexity, Microsoft Copilot, Meta AI, Grok and DeepSeek.
Philip Dulian | Picture Alliance | Getty Images
Money keeps flowing into artificial intelligence companies but out of AI stocks.
In what looks like — once again — a scenario of the left hand scratching the right, Microsoft and Nvidia will be investing a combined $15 billion into Anthropic, while the OpenAI competitor has committed to buying compute power from its two newest stakeholders. At this point, it seems as if a big proportion of AI news can be summarized as: “Company X invests in Company Y, and Company Y will buy things from Company X.”
Okay, that’s unfair. There are a lot of developments in the AI world that are not about investments but, well, development. Google unveiled the third version of Gemini, its AI model, which Demis Hassabis, CEO of Google’s AI unit DeepMind, said will be “trading cliché and flattery for genuine insight.” (But I still want an AI chatbot to compliment me on my curiosity when I ask how to cut a pear, so I’m not sure if that’s a pro for me.)
Investors, however, still appear skeptical about AI. Major names such as Nvidia, Amazon and Microsoft tumbled Tuesday stateside, giving the S&P 500 its fourth straight session in the red — the longest decline since August.
And if Nvidia — “the top company within the top industry within the top sector,” as CFRA’s chief investment strategist Sam Stovall puts it — fails to satisfy investors’ expectations when it reports earnings Wednesday, we might be seeing the S&P 500’s slide extend.
Anthropic signs deal with Microsoft and Nvidia. Microsoft announced Tuesday it will invest up to $5 billion in the startup, while Nvidia will put in up to $10 billion. That puts Anthropic’s valuation around $350 billion, according to a source.
Google announces its latest AI model Gemini 3. Alphabet CEO Sundar Pichai said Tuesday it will require “less prompting” for desired answers. The update comes eight months after Google introduced Gemini 2.5, and will be rolled out in the coming weeks.
A Tesla Inc. robotaxi on Oltorf Street in Austin, Texas, on June 22, 2025.
Tim Goessman | Bloomberg | Getty Images
Tesla has obtained a permit to operate a ride-hailing service in Arizona, the state’s department of transportation said.
The electric vehicle company applied for a “transportation network company” permit on Nov. 13, and was approved on Monday, ADOT said in an emailed statement. Additional permits will be required before Tesla can operate a robotaxi service in Arizona.
In July, Tesla applied to conduct autonomous vehicle testing and operations in Phoenix, with and without human safety drivers on board. A month earlier, Tesla started a robotaxi pilot in Austin, Texas, with safety valets and remote operators. Tesla also operates a more traditional car service in the San Francisco Bay Area.
Tesla didn’t immediately respond to a request for comment.
Tesla plans to take human safety drivers out of its cars in Austin before the end of this year. The company is aiming to operate a commercial robotaxi service in Phoenix and several other U.S. cities before the end of 2026.
According to the National Highway Traffic Safety Administration’s website, Tesla cars equipped with automated driving systems were involved in seven reported collisions following the launch of the company’s pilot in Texas.
Competitors including Alphabet’s Waymo in the U.S. and Baidu’s Apollo Go in China are way ahead in the nascent robotaxi ride-hailing market. In the Phoenix area, Waymo operates a sizable commercial business, with at least 400 autonomous vehicles, the company previously told CNBC. In May, Waymo said it had surpassed 10 million driverless trips served to riders across the U.S.
Baidu said in an earnings update on Tuesday that its Apollo Go service “provided 3.1 million fully driverless operational rides in the third quarter of 2025,” representing year-over-year growth of 212%.
Musk has promised for years that Tesla will “solve” autonomy, but the company has yet to reach those goals. The world’s richest person has continued with the lofty pronouncements.
At the company’s 2025 shareholder meeting earlier this month, Musk said the “killer app” for self-driving technology is when people can “text and drive,” or “sleep and drive.”
“Before we allow the car to be driven without paying attention, we need to make sure it’s very safe,” Musk said. “We’re on the cusp of that. I know I’ve said that a few times. We really are at this point.”
[PRO] Potentially resilient stocks amid AI slump. There are some global stocks and non-equity assets that could weather the turbulence in U.S. tech names happening recently, strategists told CNBC.
Miffed over Japanese Prime Minister Sanae Takaichi’s comments related to Taiwan, China on Friday advised its citizens against travelling to Japan. Japanese tourism-exposed stocks fell in the aftermath of the warning, and experts caution the impact could be more severe over a longer time frame.
Takahide Kiuchi, executive economist at Nomura Research Institute, said tensions between the two Asian powers could shave 1.79 trillion yen off Japan’s GDP over the course of a year, a decline of 0.29%.