Sergey Brin, co-founder of Google and former president of Alphabet
David Paul Morris | Bloomberg | Getty Images
Google co-founder Sergey Brin, in a rare public appearance over the weekend, told a group of artificial intelligence enthusiasts that he came out of retirement “because the trajectory of AI is so exciting.”
Brin, 50, spoke to entrepreneurs on Saturday at the “AGI House” in Hillsborough, California, just south of San Francisco, where developers and founders were testing Google’s Gemini model. AGI stands for artificial general intelligence and refers to a form of AI that can complete tasks at the same level as, or a step above, humans.
In taking questions from the crowd, Brin discussed AI’s impact on search and how Google can maintain its leadership position in its core market as AI continues to grow. He also commented on the flawed launch last month of Google’s image generator, which the company pulled after users discovered historical inaccuracies and questionable responses.
“We definitely messed up on the image generation,” Brin said on Saturday. “I think it was mostly due to just not thorough testing. It definitely, for good reasons, upset a lot of people.”
Google said last week that it plans to relaunch the image generation feature soon.
Brin co-founded Google with Larry Page in 1998, but stepped down as president of Alphabet in 2019. He remains a board member and a principal shareholder, with a stake in the company worth about $100 billion. He’s returned to work at the company as part of an effort to help ramp up Google’s position in the hypercompetitive AI market.
In some cases on Saturday, Brin said he was giving “personal” answers, as opposed to representing the company.
“Seeing what these models can do year after year is astonishing,” he said at the event, a recording of which was viewed by CNBC.
Regarding the recent challenges with Gemini that led to flawed image results, Brin said the company isn’t quite sure why responses have a leftward tilt, in the political sense.
“We haven’t fully understood why it leans left in many cases” but “that’s not our intention,” he said. The company has recently improved accuracy by as much as 80% on certain internal tests, Brin added.
Brin’s comments represent the first time a company executive has spoken on the Gemini matter in a live setting. The company previously sent prepared statements from Prabhakar Raghavan, Google’s head of search, and CEO Sundar Pichai in response to the controversial rollout.
Here’s what Raghavan said in a blog post on Feb. 23:
“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive. These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.”
Google declined to comment for this story. Brin didn’t immediately respond to a request for comment.
‘Some pretty weird things’
Brin said Google is far from alone in its struggles to produce accurate results with AI. He cited OpenAI’s ChatGPT and Elon Musk’s Grok as AI services that “say some pretty weird things that are out there that definitely feel far left, for example.”
Hallucinations, or false responses to a user’s prompt, are still “a big problem right now,” he said. “No question about it.”
“We have made them hallucinate less and less over time, but I’d definitely be excited to see a breakthrough that’s near-zero,” Brin said. “But you can’t just like — count on breakthroughs so I think we’re just going to keep doing the incremental things we do to bring it down, down, down over time.”
When asked by an attendee if he wants to build AGI, Brin answered in the affirmative, citing the ability for AI to help with “reasoning.”
Brin was also asked how online advertising will be disrupted considering ad revenue is core to Google’s business. The company has reported slowing ad growth in the last few years.
Sergey Brin, Google Inc. co-founder, left, Larry Page, Google Inc. co-founder, center, and Eric Schmidt, Google Inc. chairman and chief executive officer, attend a news conference inside the Sun Valley Inn at the 28th annual Allen & Co. Media and Technology Conference in Sun Valley, Idaho, U.S., on Thursday, July 8, 2010.
Bloomberg | Bloomberg | Getty Images
“I of all people am not too terribly concerned about business model shifts,” Brin said. “I think it’s wonderful that we’ve been now for 25 years, or whatever, able to give just world class information search for free to everyone and that’s supported by advertising, which in my mind is great for the world.”
He did acknowledge that the business is likely to change.
“I expect business models are going to evolve over time,” he said. “And maybe it will still be advertising because advertising could work better, the AI is able to better tailor it.”
Brin is confident in Google’s position.
“I personally feel as long as there’s huge value being generated, we’ll figure out the business models,” he said.
Beyond AI, Brin was asked about Google’s difficulties in hardware given recent advancements in virtual reality. Google was notoriously early to the augmented reality market with the now-defunct Google Glass.
“I feel like I made some bad decisions,” he said, referring to Google Glass. If he were doing it differently, Brin said, he would have treated Google Glass as a prototype instead of a product. “But, I’m still a fan of the lightweight” form, he said.
In regards to the Apple Vision Pro and Meta’s Quest headsets, Brin said, “They’re very impressive.”
When asked how he sees Gemini impacting spatial computing or products like Google Maps or Street View, Brin responded with as much curiosity as anything.
“To be honest, I haven’t thought about it, but now that you say it, yeah there’s no reason we couldn’t put in more 3D data,” Brin said, to laughs from the crowd. “Maybe somebody’s doing it at Gemini — I don’t know.”
Video generation startup Luma AI said it raised $900 million in a new funding round led by Humain, an artificial intelligence company owned by Saudi Arabia’s Public Investment Fund.
The financing, which included participation from Advanced Micro Devices’ venture arm and existing investors Andreessen Horowitz, Amplify Partners and Matrix Partners, was announced at the U.S.-Saudi Investment Forum on Wednesday.
The company is now valued at upwards of $4 billion, CNBC has confirmed.
Luma develops multimodal “world models” that are able to learn from not only text, but also video, audio and images in order to simulate reality. CEO Amit Jain told CNBC in an interview that these models expand beyond large language models, which are solely trained on text, to be more effective in “helping in the real, physical world.”
“With this funding, we plan to scale and accelerate our efforts in training and then deploying these world models today,” Jain said.
Luma released Ray3 in September, the first reasoning video model that can interpret prompts to create videos, images and audio. Jain said Ray3 currently benchmarks higher than OpenAI’s Sora 2 and around the same level as Google’s Veo 3.
Humain, which was launched in May, is aiming to deliver full-stack AI capabilities to bolster Saudi Arabia’s position as a global AI hub. The company is led by industry veteran Tareq Amin, who previously ran Aramco Digital and before that was CEO of Rakuten Mobile.
Luma and Humain will also partner to build a 2-gigawatt AI supercluster, dubbed Project Halo, in Saudi Arabia. The buildout will be one of the largest deployments of graphics processing units (GPUs) in the world, Jain said.
Major tech companies have been investing in supercomputers across the globe to train massive AI models. In July, Meta announced plans to build a 1-gigawatt supercluster called Prometheus, and Microsoft deployed the first supercomputing cluster using the Nvidia GB300 NVL72 platform in October.
“Our investment in Luma AI, combined with HUMAIN’s 2GW supercluster, positions us to train, deploy, and scale multimodal intelligence at a frontier level,” Amin said in a release. “This partnership sets a new benchmark for how capital, compute, and capability come together.”
The collaboration also includes Humain Create, an initiative to develop sovereign AI models trained on Arabic and regional data. Along with focusing on building the world’s first Arabic video model, Jain said Luma models and capabilities will be deployed to Middle Eastern businesses.
He added that since most models are trained by scraping data from the internet, countries outside the U.S. and Asia are often less represented in AI-generated content.
“It’s really important that we bring these cultures, their identities, their representation — visual and behavioral and everything — to our model,” Jain said.
AI-generated content tools have received significant backlash over the past year from entertainment studios over copyright concerns. Luma’s flagship text-to-video platform Dream Machine garnered some accusations of copying IP earlier this year, but Jain said the company has installed safeguards to prevent unwanted usage.
“Even if you really try to trick it, we are constantly improving it,” he said. “We have built very robust systems that are actually using models we trained to detect them.”
Nvidia founder and CEO Jensen Huang reacts during a press conference at the Asia-Pacific Economic Cooperation (APEC) CEO Summit in Gyeongju on October 31, 2025.
Jung Yeon-je | Afp | Getty Images
Artificial intelligence chipmaker Nvidia is scheduled to report fiscal third-quarter earnings on Wednesday after the market closes.
Here’s what Wall Street is expecting, per LSEG consensus estimates:
EPS: $1.25
Revenue: $54.92 billion
Wall Street is expecting the chipmaker to guide to $1.43 in earnings per share on $61.66 billion of revenue for the current quarter. Nvidia typically provides one quarter of revenue guidance.
Anything Nvidia or CEO Jensen Huang says about the company’s outlook and its sales backlog will be closely scrutinized.
Nvidia is at the center of the AI boom, and it counts every major cloud company and AI lab as a customer. All of the major AI labs use Nvidia chips to develop next-generation models, and a handful of companies called hyperscalers have committed hundreds of billions of dollars to construct new data centers around Nvidia technology in unprecedented build-outs.
Last month, Huang said Nvidia had $500 billion in chip orders in calendar 2025 and 2026, including the forthcoming Rubin chip, which will start shipping in volume next year. Analysts will want to know more about what Nvidia sees coming from the AI infrastructure world next year, because all five of the top AI model developers in the U.S. use the company’s chips.
As of Tuesday, analysts polled by LSEG expect Nvidia’s sales to rise 39% in the company’s fiscal 2027, which starts in early 2026.
Investors will want to hear about Nvidia’s equity deals with customers and suppliers, including an agreement to invest in OpenAI, a deal with Nokia and an investment into former rival Intel. Nvidia has kept its pace of deal-making up, agreeing to invest $10 billion into AI company Anthropic earlier this week.
Nvidia management will also be asked about China, and the possibility that the company could gain licenses from the U.S. government to export a version of its current-generation Blackwell AI chip to the country. Analysts say Nvidia’s sales could get a boost of as much as $50 billion per year if it is allowed to sell current-generation chips to Chinese companies.
Perplexity on Wednesday announced it will roll out a free agentic shopping product for U.S. users next week, as consumers ramp up spending for the holiday season.
“The agentic part is the seamless purchase right from the answer,” Dmitry Shevelenko, Perplexity’s chief business officer, told CNBC in an interview. “Most people want to still do their own research. They want that streamlined and simplified, and so that’s the part that is agentic in this launch.”
The artificial intelligence startup has partnered with PayPal ahead of the launch, and users will eventually be able to directly purchase items from more than 5,000 merchants through Perplexity’s search engine.
Perplexity initially released a shopping offering called “Buy With Pro” for its paid subscribers late last year. The company said its new free product will be better at detecting shopping intent and will deliver more personalized results by drawing on memory from a user’s previous searches.
Perplexity declined to share whether it will earn revenue from transactions that are completed through its platform.
The startup’s competitor OpenAI announced a similar e-commerce feature called Instant Checkout in September, which allows ChatGPT users to buy items from merchants without leaving the chatbot’s interface. OpenAI has said it will take a fee from those purchases.
Etsy and Shopify were named as OpenAI’s initial partners for Instant Checkout, but it also inked a deal with PayPal late last month.
Starting next year, PayPal users will be able to buy items, and PayPal merchants will be able to sell items through ChatGPT.
Michelle Gill, who leads PayPal’s agentic strategy, said the company has been building out infrastructure and protections as AI ushers in the “next era of commerce.”
Part of that means keeping consumers and merchants connected to PayPal as they engage on new platforms like Perplexity, she said.
Perplexity said PayPal merchants will serve as the merchants of record through its agentic shopping product, which will allow them to handle processes like purchases, customer service and returns directly.
Through its “Buy With Pro” offering, Perplexity had served as the intermediary that completed purchases.
Gill said PayPal’s buyer protection policies, which can help users get reimbursed if there are problems with their orders, will also apply to transactions on Perplexity.
“We’re really excited about this launch because we will see it come to life during a period that’s so organic for people to shop,” Gill said in an interview.