Ruth Porat, chief financial officer of Alphabet Inc., speaks during a news conference at Michigan Central Station in Detroit, Michigan, on Friday, Feb. 4, 2022.
Jeff Kowalsky | Bloomberg | Getty Images
A string of Google executives have changed their roles in the span of several months, in a shift that has sidelined many of the company’s remaining old guard.
The changes encompass high-profile executives such as finance chief Ruth Porat, YouTube CEO Susan Wojcicki and employee No. 8, Urs Hölzle, among others. Some say they left their roles in search of a new challenge; others departed to pursue opportunities in artificial intelligence.
In February, Wojcicki — one of the most prominent women in Silicon Valley — announced that she was stepping back after nine years at the helm of the Google-owned platform that grew to be the world’s most popular video service. She had been at Google for more than 25 years, after famously lending her garage to Google founders Sergey Brin and Larry Page to use as their first office.
While she’ll still be in an advisory role at Google, she said she wanted to “start a new chapter.”
Wojcicki wasn’t the only executive to leave YouTube. Robert Kyncl, the chief business officer for 12 years, stepped away to become CEO of Warner Music Group at the beginning of the year.
In March, CapitalG founder and longtime Google employee David Lawee stepped down from his role after 17 years at Alphabet, saying he wanted to explore new areas of interest and spend more time with his family.
Hölzle, who has long overseen Google’s technical infrastructure and was its eighth employee, said he would be stepping back from management after 24 years of leading technical teams, CNBC reported in July. Hölzle will be classified as an “individual contributor,” which means he will be working independently and no longer managing employees.
Also in July, Porat announced that she will step down as Alphabet’s chief financial officer after eight years and take a new role as president and chief investment officer. When asked about the timing of the move, Porat, who was previously Morgan Stanley’s CFO, said she wanted to take on a different set of challenges.
Porat will also engage with policymakers to “recognize the importance of technology” on issues including employment, economic competitiveness and infrastructure expansion, the company said.
“We have a steady and experienced leadership team, many of whom have been with the company for well over a decade,” said Google spokesperson Courtenay Mencini in a statement about the shifts. “We also have a strong bench of leaders at Google who can smoothly transition when people who’ve had long and successful careers here decide to pursue new opportunities inside and outside the company.”
Searching for itself in an AI-first world
As Google looks for replacements for executives like Porat, it’s also searching for its own identity at a pivotal moment in the company’s history.
The company was caught flat-footed last fall when OpenAI launched its AI-powered chatbot ChatGPT, and suddenly found itself in a rare spot where its core search business was threatened.
Industry observers wondered: if users could simply get answers from an AI-powered chatbot, how long would they keep entering queries into a search engine? It was an ironic moment for the search giant, given that CEO Sundar Pichai had been talking up the company’s “AI-first” strategy since 2016, with little to show externally.
In June, Google execs admitted to employees that users are “still not quite happy” with the search experience, CNBC reported. Search boss Prabhakar Raghavan and engineering VP HJ Kim spent several minutes assuring employees they would do a better job, while Pichai noted that Google is still the most trusted search engine.
Geoffrey Hinton, known as “The godfather of AI” and one of the most respected voices in the field, told The New York Times in May that he was leaving the company after a decade to warn the world about the potential threat of AI, which he said is coming sooner than he previously thought.
Shortly before that, amid a reorganization in Google’s AI teams, the company promoted the CEO of its DeepMind subsidiary, Demis Hassabis, to lead AI for the entire company, and former McKinsey exec James Manyika to become Google’s senior vice president of technology and society and to oversee Google Research.
Google’s AI head, Jeff Dean, who’s been at the company since 1999, became a chief scientist as part of the change. The company called it a promotion, but it effectively moved him out of a top leadership role in AI and into an individual contributor position, reportedly helping oversee Gemini, one of its critical large language models.
The company is also cutting costs, another rarity, while the core search product faces changing user behavior, ad pullbacks and an AI boom that requires increasing investment, all amid a slowing economy and investor calls to reduce spending.
It’s also staring down multiple federal lawsuits, including an imminent antitrust trial set to begin in September that alleges Google illegally maintained a monopoly by cutting off rivals from search distribution channels.
More like other big companies, some employees say
Employees’ perceptions of the company have also changed in recent years.
While potential employees still consider Google a top place to work with extremely competitive perks, it has grown to be more bureaucratic than in its earlier days.
This perception shift has created a “fragile moment” for Google amid the pressure from OpenAI and Microsoft, argued former Google employee Praveen Seshadri in a Medium post that went viral earlier this year.
“I have left Google understanding how a once-great company has slowly ceased to function,” wrote Seshadri in his blog post that detailed the challenges of Google’s growing bureaucracy.
“Like mice, they are trapped in a maze of approvals, launch processes, legal reviews, performance reviews, exec reviews, documents, meetings, bug reports, triage, OKRs, H1 plans followed by H2 plans, all-hands summits, and inevitable reorgs.”
Former Waze CEO Noam Bardin, who quit Google in 2021, shared Seshadri’s post on LinkedIn. In a blog post a couple of years earlier, Bardin had written that employees aren’t incentivized to build Google products.
“The problem was me — believing I can keep the startup magic within a corporation, in spite of all the evidence showing the opposite,” he wrote in his critique of the company.
Like Seshadri and Bardin, a number of AI specialists have left the company, saying it had grown too bureaucratic to get things done.
Eight AI researchers who created “Transformers,” an integral part of the infrastructure behind ChatGPT and other chatbots, have left the search giant since 2017 — many of them going on to start their own companies. Five of them left in 2021 alone.
Llion Jones, who departed Google this month to start his own company focused on AI, told CNBC’s Jordan Novet, “the bureaucracy had built to the point where I just felt like I couldn’t get anything done.”
Other AI researchers at Google have made similar complaints in recent months. Several have gone on to start their own companies focused on AI, where they have more agency over vision and speed.
In February, longtime product exec Clay Bavor said after 18 “wonderful years” at Google, he was leaving to start an artificial intelligence company with former Salesforce co-CEO Bret Taylor. “We share an obsession with recent advances in AI, and we’re excited to build a new company to apply AI to solve some of the most important problems in business,” Bavor wrote at the time.
“We’ve made intentional efforts throughout the year to move quickly with nimble teams,” said Google spokesperson Courtenay Mencini. “For instance, products like Bard and SGE [Search Generative Experience] are being developed by small, fast-moving teams that have been built for these high-priority efforts.”
Despite those efforts, the company faced criticism from investors and its own employees when it rushed to announce its ChatGPT competitor Bard, which it began opening up to the wider public in March. While the rollout’s reputation has rebounded after several updates and a successful developer conference, the company has yet to launch SGE to the wider public.
The company has also become less flexible as it strives to get employees back into the office.
Google recently cracked down on its hybrid three-day-a-week office policy to include badge tracking, and noted attendance will be included in performance reviews, CNBC previously reported. Additionally, employees who already received approval for remote work may now have that status reevaluated.
There’s also a new emphasis on cost-cutting that has taken some employees by surprise.
Even if the company had come to be seen as slower moving, at least it had been considered secure — commonly known as a place where employees could “rest and vest.” That changed with the company’s first-ever mass layoffs in January, when Alphabet abruptly announced it was eliminating about 12,000 jobs, or 6% of its workforce, in an overnight email. Some employees reportedly arrived at work to discover their badges no longer worked. The company then declined to pay out the remainder of employees’ approved leave time.
While the company offered competitive severance packages, some employees lost trust in leadership, which had long encouraged employees to be kind, humble and open-minded, or “Googley.”
The company has also reduced spending on real estate, even asking employees in its cloud unit to share desks. It’s also cut down on desktop PCs and equipment refreshes for employees. It started cutting travel and events late last year.
In an all-hands meeting last September, employees voted to ask Pichai why the company is “nickel-and-diming employees” with some of its cutbacks on perks and travel.
Google’s culture can still be enjoyable even if some things, like certain swag items, are getting taken away, the CEO argued.
“I remember when Google was small and scrappy,” Pichai said. “We shouldn’t always equate fun with money. I think you can walk into a hardworking startup and people may be having fun and it shouldn’t always equate to money.”
Pichai’s statement touched a nerve. Yes, many people joined Google so their work would immediately have an impact on many more users than it would at other companies. It’s still considered one of the top places to work, with opportunities to tackle some of the industry’s biggest problems. But, alongside all that, money and perks had flowed generously, regardless of the speed at which projects moved.
Now, the company faces its biggest challenge yet, one that falls on the shoulders of Pichai and the next guard: recreating the magic of its early days while delivering revenue under more pressure than ever.
Larry Ellison, Oracle’s co-founder and chief technology officer, appears at the Formula One British Grand Prix in Towcester, U.K., on July 6, 2025.
Jay Hirano | Sopa Images | Lightrocket | Getty Images
Oracle is scheduled to report fiscal second-quarter results after market close on Wednesday.
Here’s what analysts are expecting, according to LSEG:
Earnings per share: $1.64 adjusted
Revenue: $16.21 billion
Wall Street expects revenue to increase 15% in the quarter that ended Nov. 30, from $14.1 billion a year earlier. Analysts polled by StreetAccount are looking for $7.92 billion in cloud revenue and $6.06 billion from software.
The report lands at a critical moment for Oracle, which has tried to position itself at the center of the artificial intelligence boom by committing to massive build-outs. While the move has been a boon for Oracle’s revenue and its backlog, investors have grown concerned about the amount of debt the company is raising and the risks it faces should the AI market slow.
The stock plummeted 23% in November, its worst monthly performance since 2001 and, as of Tuesday’s close, is 33% below its record reached in September. Still, the shares are up 33% for the year, outperforming the Nasdaq, which has gained 22% over that stretch.
Over the past decade, Oracle has diversified its business beyond databases and enterprise software and into cloud infrastructure, where it competes with Amazon, Microsoft and Google. Those companies are all vying for big AI contracts and are investing heavily in data centers and hardware necessary to meet expected demand.
OpenAI, which sparked the generative AI rush with the launch of ChatGPT three years ago, has committed to spending more than $300 billion on Oracle’s infrastructure services over five years.
“Oracle’s job is not to imagine gigawatt-scale data centers. Oracle’s job is to build them,” Larry Ellison, the company’s co-founder and chairman, told investors in September.
Oracle raised $18 billion during the period, one of the biggest issuances on record for a tech company. Skeptical investors have been buying five-year credit default swaps, driving them to multiyear highs. Credit default swaps are like insurance for investors, with buyers paying for protection in case the borrower can’t repay its debt.
“Customer concentration is a major issue here, but I think the bigger thing is: How are they going to pay for this?” said RBC analyst Rishi Jaluria, who has the equivalent of a hold rating on Oracle’s stock.
During the quarter, Oracle named executives Clay Magouyrk and Mike Sicilia as the company’s new CEOs, succeeding Safra Catz. Oracle also introduced AI agents for automating various facets of finance, human resources and sales.
Executives will discuss the results and issue guidance on a conference call starting at 5 p.m. ET.
The U.S. has banned the export of Nvidia’s Blackwell chips, which are considered the company’s most advanced offerings, to China in an effort to stay ahead in the AI race.
DeepSeek is reportedly using chips that were smuggled into the country without authorization, according to The Information.
“We haven’t seen any substantiation or received tips of ‘phantom datacenters’ constructed to deceive us and our OEM partners, then deconstructed, smuggled, and reconstructed somewhere else,” a Nvidia spokesperson said in a statement. “While such smuggling seems farfetched, we pursue any tip we receive.”
Nvidia has been one of the biggest winners of the AI boom so far because it develops the graphics processing units (GPUs) that are key for training models and running large workloads.
Since the hardware is so crucial for advancing AI technology, Nvidia’s relationship with China has become a political flashpoint among U.S. lawmakers.
President Donald Trump on Monday said Nvidia can ship its H200 chips to “approved customers” in China and elsewhere on the condition that the U.S. will get 25% of those sales.
The announcement was met with pushback from some Republicans.
DeepSeek spooked the U.S. tech sector in January when it released a reasoning model, called R1, that rocketed to the top of app stores and industry leaderboards. R1 was also created at a fraction of the cost of other models in the U.S., according to some analyst estimates.
In August, DeepSeek hinted that China will soon have its own “next generation” chips to support its AI models.
The Starcloud-1 satellite is launched into space from a SpaceX rocket on November 2, 2025.
Courtesy: SpaceX | Starcloud
Nvidia-backed startup Starcloud trained an artificial intelligence model from space for the first time, signaling a new era for orbital data centers that could alleviate Earth’s escalating digital infrastructure crisis.
Last month, the Washington-based company launched a satellite with an Nvidia H100 graphics processing unit, sending into outer space a chip that’s 100 times more powerful than any GPU compute that has been in space before. Now, the company’s Starcloud-1 satellite is running and querying responses from Gemma, an open large language model from Google, in orbit, marking the first time in history that an LLM has been run on a high-powered Nvidia GPU in outer space, CNBC has learned.
“Greetings, Earthlings! Or, as I prefer to think of you — a fascinating collection of blue and green,” reads a message from the recently launched satellite. “Let’s see what wonders this view of your world holds. I’m Gemma, and I’m here to observe, analyze, and perhaps, occasionally offer a slightly unsettlingly insightful commentary. Let’s begin!” the model wrote.
Starcloud’s output from Gemma in space. Gemma is a family of open models built from the same technology used to create Google’s Gemini AI models.
Starcloud
Starcloud wants to show outer space can be a hospitable environment for data centers, particularly as Earth-based facilities strain power grids, consume billions of gallons of water annually and produce hefty greenhouse gas emissions. The electricity consumption of data centers is projected to more than double by 2030, according to data from the International Energy Agency.
Starcloud CEO Philip Johnston told CNBC that the company’s orbital data centers will have 10 times lower energy costs than terrestrial data centers.
“Anything you can do in a terrestrial data center, I’m expecting to be able to be done in space. And the reason we would do it is purely because of the constraints we’re facing on energy terrestrially,” Johnston said in an interview.
Johnston, who co-founded the startup in 2024, said Starcloud-1’s operation of Gemma is proof that space-based data centers can exist and operate a variety of AI models in the future, particularly those that require large compute clusters.
“This very powerful, very parameter-dense model is living on our satellite,” Johnston said. “We can query it, and it will respond in the same way that when you query a chat from a database on Earth, it will give you a very sophisticated response. We can do that with our satellite.”
In a statement to CNBC, Google DeepMind product director Tris Warkentin said that “seeing Gemma run in the harsh environment of space is a testament to the flexibility and robustness of open models.”
In addition to Gemma, Starcloud was able to train NanoGPT, an LLM created by OpenAI founding member Andrej Karpathy, on the H100 chip using the complete works of Shakespeare. This led the model to speak in Shakespearean English.
Starcloud — a member of the Nvidia Inception program and graduate from Y Combinator and the Google for Startups Cloud AI Accelerator — plans to build a 5-gigawatt orbital data center with solar and cooling panels that measure roughly 4 kilometers in both width and height. A compute cluster of that gigawatt size would produce more power than the largest power plant in the U.S. and would be substantially smaller and cheaper than a terrestrial solar farm of the same capacity, according to Starcloud’s white paper.
These data centers in space would capture constant solar energy to power next-generation AI models, unhindered by the Earth’s day and night cycles and weather changes. Starcloud’s satellites should have a five-year lifespan given the expected lifetime of the Nvidia chips on its architecture, Johnston said.
Orbital data centers would have real-world commercial and military use cases. Already, Starcloud’s systems can enable real-time intelligence and, for example, spot the thermal signature of a wildfire the moment it ignites and immediately alert first responders, Johnston said.
“We’ve linked in the telemetry of the satellite, so we linked in the vital signs that it’s drawing from the sensors — things like altitude, orientation, location, speed,” Johnston said. “You can ask it, ‘Where are you now?’ and it will say ‘I’m above Africa and in 20 minutes, I’ll be above the Middle East.’ And you could also say, ‘What does it feel like to be a satellite?’ And it will say, ‘It’s kind of a bit weird’ … It’ll give you an interesting answer that you could only have with a very high-powered model.”
Starcloud is working on customer workloads by running inference on satellite imagery from observation company Capella Space, which could help spot lifeboats from capsized vessels at sea and forest fires in a certain location. The company will include several Nvidia H100 chips and integrate Nvidia’s Blackwell platform onto its next satellite launch in October 2026 to offer greater AI performance. The satellite launching next year will feature a module running a cloud platform from cloud infrastructure startup Crusoe, allowing customers to deploy and operate AI workloads from space.
“Running advanced AI from space solves the critical bottlenecks facing data centers on Earth,” Johnston told CNBC.
“Orbital compute offers a way forward that respects both technological ambition and environmental responsibility. When Starcloud-1 looked down, it saw a world of blue and green. Our responsibility is to keep it that way,” he added.
The risks
Risks in operating orbital data centers remain, however. Analysts from Morgan Stanley have noted that orbital data centers could face hurdles such as harsh radiation, difficulty of in-orbit maintenance, debris hazards and regulatory issues tied to data governance and space traffic.
Still, tech giants are pursuing orbital data centers given the prospect of nearly limitless solar energy and greater, gigawatt-sized operations in space.
Along with Starcloud and Nvidia’s efforts, several companies have announced space-based data center missions. On Nov. 4, Google unveiled a “moonshot” initiative called Project Suncatcher, which aims to put solar-powered satellites into space with Google’s tensor processing units. Privately owned Lonestar Data Holdings is working to put the first-ever commercial lunar data center on the moon’s surface.
Referring to Starcloud’s launch in early November, Nvidia senior director of AI infrastructure Dion Harris said: “From one small data center, we’ve taken a giant leap toward a future where orbital computing harnesses the infinite power of the sun.”