There's a water crisis looming. Big Tech and AI could make it worse

Dubai, UNITED ARAB EMIRATES — A global rush for the next wave of generative artificial intelligence is increasing public scrutiny on an often-overlooked but critically important environmental issue: Big Tech’s expanding water footprint.

Tech giants, including Microsoft and Alphabet-owned Google, have recently reported a substantial upswing in their water consumption, and researchers say one of the main culprits is the race to capitalize on the next wave of AI.

Shaolei Ren, a researcher at the University of California, Riverside, published a study in April investigating the resources needed to run buzzy generative AI models, such as OpenAI’s ChatGPT.

Ren and his colleagues found that ChatGPT gulps 500 milliliters of water (roughly the amount of water in a standard 16-ounce bottle) for every 10 to 50 prompts, depending on when and where the AI model is deployed.
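The study's per-prompt figure is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, using only the numbers reported above; the 100-million-prompt daily volume is a hypothetical illustration, not a figure from the study:

```python
# Back-of-envelope check: 500 ml per batch of 10-50 prompts implies
# roughly 10-50 ml of water per individual prompt.
WATER_ML_PER_BATCH = 500       # ml consumed per batch of prompts (from the study)
PROMPTS_PER_BATCH = (10, 50)   # batch size range reported

per_prompt_ml = [WATER_ML_PER_BATCH / n for n in PROMPTS_PER_BATCH]
print(f"Per-prompt water use: {min(per_prompt_ml):.0f}-{max(per_prompt_ml):.0f} ml")

# Hypothetical scale: 100 million prompts a day at the low 10 ml/prompt rate.
daily_liters = 100_000_000 * (WATER_ML_PER_BATCH / max(PROMPTS_PER_BATCH)) / 1000
print(f"Daily water at 100M prompts: {daily_liters:,.0f} liters")
```

Even at the study's most conservative rate, usage at chatbot scale runs to roughly a million liters a day.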

With hundreds of millions of monthly users submitting questions to the popular chatbot, it quickly becomes clear just how “thirsty” AI models can be.

The study’s authors warned that if the growing water footprint of AI models is not sufficiently addressed, the issue could become a major roadblock to the socially responsible and sustainable use of AI in the future.

People take part in a protest called by Uruguay’s central union (PIT-CNT) in “defense of water,” criticizing the national authorities’ management of dwindling drinking water reserves, in Montevideo on May 31, 2023.

Eitan Abramovich | Afp | Getty Images

ChatGPT creator OpenAI, part-owned by Microsoft, did not respond to a request to comment on the study’s findings.

“In general, the public is getting more knowledgeable and aware of the water issue and if they learn that the Big Tech’s are taking away their water resources and they are not getting enough water, nobody will like it,” Ren told CNBC via videoconference.

“I think we are going to see more clashes over the water usage in the coming years as well, so this type of risk will have to be taken care of by the companies,” he added.

‘A hidden cost’

Data centers are part of the lifeblood of Big Tech — and a lot of water is required to keep the power-hungry servers cool and running smoothly.

For Meta, it’s these warehouse-scale data centers that generate not only the highest percentage of its water use but also the lion’s share of its energy use and greenhouse gas emissions.

In July, protesters took to the streets of Uruguay’s capital to push back against Google’s plan to build a data center. The proposal sought to use vast quantities of water at a time when the South American country was suffering its worst drought in 74 years.

Google reportedly said at the time the project was still at an exploratory phase and stressed that sustainability remained at the heart of its mission.

With AI, we’re seeing the classic problem with technology in that you have efficiency gains but then you have rebound effects with more energy and more resources being used.

Somya Joshi

Head of division: global agendas, climate and systems at SEI

In Microsoft’s latest environmental sustainability report, the U.S. tech company disclosed that its global water consumption rose by more than a third from 2021 to 2022, climbing to nearly 1.7 billion gallons.

It means that Microsoft’s annual water use would be enough to fill more than 2,500 Olympic-sized swimming pools.
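The swimming-pool comparison checks out arithmetically. A quick sketch, assuming the common approximation that an Olympic-sized pool holds about 660,000 US gallons (2.5 million liters):

```python
# Sanity check of the Olympic-pool comparison in the text.
GALLONS_TOTAL = 1_700_000_000        # Microsoft's reported 2022 water use (approx.)
GALLONS_PER_OLYMPIC_POOL = 660_000   # common approximation for a 50 m pool

pools = GALLONS_TOTAL / GALLONS_PER_OLYMPIC_POOL
print(f"Equivalent Olympic pools: {pools:,.0f}")  # ~2,576, i.e. "more than 2,500"
```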

For Google, meanwhile, total water consumption at its data centers and offices came in at 5.6 billion gallons in 2022, a 21% increase on the year before.

Both companies are working to reduce their water footprint and become “water positive” by the end of the decade, meaning that they aim to replenish more water than they use.

It’s notable, however, that their latest water consumption figures were disclosed before the launch of their own respective ChatGPT competitors. The computing power needed to run Microsoft’s Bing Chat and Google Bard could mean significantly higher levels of water use over the coming months.

“With AI, we’re seeing the classic problem with technology in that you have efficiency gains but then you have rebound effects with more energy and more resources being used,” said Somya Joshi, head of division: global agendas, climate and systems at the Stockholm Environment Institute.

“And when it comes to water, we’re seeing an exponential rise in water use just for supplying cooling to some of the machines that are needed, like heavy computation servers, and large-language models using larger and larger amounts of data,” Joshi told CNBC during the COP28 climate summit in the United Arab Emirates.

“So, on one hand, companies are promising to their customers more efficient models … but this comes with a hidden cost when it comes to energy, carbon and water,” she added.

How are tech firms reducing their water footprint?

A spokesperson for Microsoft told CNBC that the company is investing in research to measure the energy and water use and carbon impact of AI, while working on ways to make large systems more efficient.

“AI will be a powerful tool for advancing sustainability solutions, but we need a plentiful clean energy supply globally to power this new technology, which has increased consumption demands,” a spokesperson for Microsoft told CNBC via email.

“We will continue to monitor our emissions, accelerate progress while increasing our use of clean energy to power datacenters, purchasing renewable energy, and other efforts to meet our sustainability goals of being carbon negative, water positive and zero waste by 2030,” they added.

Aerial view of the proposed site of the Meta Platforms Inc. data center outside Talavera de la Reina, Spain, on Monday, July 17, 2023. Meta is planning to build a €1 billion ($1.1 billion) data center, which it expects will use about 665 million liters (176 million gallons) of water a year, and up to 195 liters per second during “peak water flow,” according to a technical report.

Paul Hanna | Bloomberg | Getty Images

Separately, a Google spokesperson told CNBC that research shows that while AI computing demand has dramatically increased, the energy needed to power this technology is rising “at a much slower rate than many forecasts have predicted.”

“We are using tested practices to reduce the carbon footprint of workloads by large margins; together these principles can reduce the energy of training a model by up to 100x and emissions by up to 1000x,” the spokesperson said.

“Google data centers are designed, built and operated to maximize efficiency – compared with five years ago, Google now delivers around 5X as much computing power with the same amount of electrical power,” they continued.

“To support the next generation of fundamental advances in AI, our latest TPU v4 [supercomputer] is proven to be one of the fastest, most efficient, and most sustainable ML [machine learning] infrastructure hubs in the world.”

OpenAI wins $200 million U.S. defense contract


OpenAI CEO Sam Altman speaks during the Snowflake Summit in San Francisco on June 2, 2025.

Justin Sullivan | Getty Images News | Getty Images

OpenAI has been awarded a $200 million contract to provide the U.S. Defense Department with artificial intelligence tools.

The department announced the one-year contract on Monday, months after OpenAI said it would collaborate with defense technology startup Anduril to deploy advanced AI systems for “national security missions.”

“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said. It’s the first contract with OpenAI listed on the Department of Defense’s website.

Anduril received a $100 million defense contract in December. Weeks earlier, OpenAI rival Anthropic said it would work with Palantir and Amazon to supply its AI models to U.S. defense and intelligence agencies.

Sam Altman, OpenAI’s co-founder and CEO, said in a discussion with OpenAI board member and former National Security Agency leader Paul Nakasone at a Vanderbilt University event in April that “we have to and are proud to and really want to engage in national security areas.”

OpenAI did not immediately respond to a request for comment.

The Defense Department specified that the contract is with OpenAI Public Sector LLC, and that the work will mostly occur in the National Capital Region, which encompasses Washington, D.C., and several nearby counties in Maryland and Virginia.

Meanwhile, OpenAI is working to build additional computing power in the U.S. In January, Altman appeared alongside President Donald Trump at the White House to announce the $500 billion Stargate project to build AI infrastructure in the U.S.

The new contract will represent a small portion of revenue at OpenAI, which is generating over $10 billion in annualized sales. In March, the company announced a $40 billion financing round at a $300 billion valuation.

In April, Microsoft, which supplies cloud infrastructure to OpenAI, said the U.S. Defense Information Systems Agency has authorized the use of the Azure OpenAI service with secret classified information. 

WATCH: OpenAI hits $10 billion in annual recurring revenue


Amazon Kuiper second satellite launch postponed by ULA due to rocket booster issue


A United Launch Alliance Atlas V rocket is shown on its launch pad carrying Amazon’s Project Kuiper internet network satellites as the vehicle is prepared for launch at the Cape Canaveral Space Force Station in Cape Canaveral, Florida, U.S., April 28, 2025.

Steve Nesius | Reuters

United Launch Alliance on Monday was forced to delay the second flight carrying a batch of Amazon’s Project Kuiper internet satellites because of a problem with the rocket booster.

With roughly 30 minutes left in the countdown, ULA announced it was scrubbing the launch due to an issue with “an elevated purge temperature” within its Atlas V rocket’s booster engine. The company said it will provide a new launch date at a later point.

“Possible issue with a GN2 purge line that cannot be resolved inside the count,” ULA CEO Tory Bruno said in a post on Bluesky. “We will need to stand down for today. We’ll sort it and be back.”

The launch from Florida’s Space Coast had been set for last Friday, but was rescheduled to Monday at 1:25 p.m. ET due to inclement weather.

Read more CNBC tech news

Amazon in April successfully sent 27 Kuiper internet satellites into low Earth orbit, a region of space that’s within 1,200 miles of the Earth’s surface. The second voyage will send “another 27 satellites into orbit, bringing our total constellation size to 54 satellites,” Amazon said in a blog post.

Kuiper is the latest entrant in the burgeoning satellite internet industry, which aims to beam high-speed internet to the ground from orbit. The industry is currently dominated by Elon Musk’s SpaceX, which operates Starlink. Other competitors include SoftBank-backed OneWeb and Viasat.

Amazon is targeting a constellation of more than 3,000 satellites. The company has to meet a Federal Communications Commission deadline to launch half of its total constellation, or 1,618 satellites, by July 2026.
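The FCC deadline arithmetic implies the size of the licensed constellation. A quick check, using only the figures from the text:

```python
# 1,618 satellites being "half of its total constellation" implies a
# licensed total of 3,236, consistent with the "more than 3,000" figure.
FCC_HALFWAY_MILESTONE = 1_618
total_licensed = FCC_HALFWAY_MILESTONE * 2

# Two batches of 27 (one flown in April, one planned) out of that total.
launched_and_planned = 27 + 27
print(total_licensed, launched_and_planned)
print(f"Share of constellation: {launched_and_planned / total_licensed:.1%}")
```

In other words, the first two launches cover under 2% of the constellation Amazon must have flying by mid-2026.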


Google issues apology, incident report for hourslong cloud outage


Thomas Kurian, CEO of Google Cloud, speaks at a cloud computing conference held by the company in 2019.

Michael Short | Bloomberg | Getty Images

Google apologized for a major outage that the company said was caused by multiple layers of flawed recent updates.

The company released an incident report late on Friday that explained hours of downtime on Thursday. More than 70 Google cloud services stopped working properly across the globe, knocking down or disrupting dozens of third-party services, including Cloudflare, OpenAI and Shopify. Gmail, Google Calendar, Google Drive, Google Meet and other first-party products also malfunctioned.

“We deeply apologize for the impact this outage has had,” Google wrote in the incident report. “Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust of our systems. We are committed to making improvements to help avoid outages like this moving forward.”

Thomas Kurian, CEO of Google’s cloud unit, also posted about the outage in an X post on Thursday, saying “we regret the disruption this caused our customers.”

Google in May added a new feature to its “quota policy checks” for evaluating automated incoming requests, but the new feature wasn’t immediately tested in real-world situations, the company wrote in the incident report. As a result, the company’s systems didn’t know how to properly handle data from the new feature, which included blank entries. Those blank entries were then sent out to all Google Cloud data center regions, which prompted the crashes, the company wrote.
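The failure mode described, blank entries that a system didn't know how to handle being propagated to every region, is a classic input-validation gap. A minimal, hypothetical sketch of the general pattern (not Google's actual code; the field names are invented for illustration):

```python
# Hypothetical sketch: a policy-check service that assumes every entry has a
# value will choke on blank entries unless it validates input before acting
# on it (or replicating it to other regions).
def apply_quota_policies(entries):
    """Apply quota-policy entries, rejecting malformed (blank) ones."""
    applied, rejected = [], []
    for entry in entries:
        # Defensive check: a blank or missing policy value is quarantined
        # instead of being forwarded to every data center region.
        if not entry.get("policy"):
            rejected.append(entry)
            continue
        applied.append(entry)
    return applied, rejected

good, bad = apply_quota_policies([
    {"id": 1, "policy": "limit:100"},
    {"id": 2, "policy": ""},   # the kind of blank entry at fault
    {"id": 3},                 # field missing entirely
])
print(len(good), len(bad))  # 1 2
```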

Engineers figured out the issue in 10 minutes, according to the company. However, the entire incident went on for seven hours after that, with the crash leading to an overload in some larger regions.

As it released the feature, Google did not use feature flags, an increasingly common industry practice that allows for slow implementation to minimize impact if problems occur. Feature flags would have caught the issue before the feature became widely available, Google said.
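A feature flag, in its simplest form, is just a runtime gate with a rollout percentage. A hedged sketch of the general technique (illustrative only; the flag name and handler functions are invented, and real systems typically hash on a stable user or request ID rather than rolling a fresh random number per call):

```python
import random

# Minimal feature-flag gate: route a configurable fraction of traffic to a
# new code path, so a faulty feature surfaces in monitoring at 5% rollout
# instead of crashing every region at once.
FLAGS = {"new_quota_check": 5}  # rollout percentage per feature

def is_enabled(flag):
    """Return True for roughly FLAGS[flag] percent of calls."""
    return random.uniform(0, 100) < FLAGS.get(flag, 0)

def new_quota_check(req):      # new, riskier code path (invented stub)
    return f"new:{req}"

def legacy_quota_check(req):   # proven code path (invented stub)
    return f"legacy:{req}"

def handle_request(req):
    if is_enabled("new_quota_check"):
        return new_quota_check(req)
    return legacy_quota_check(req)
```

Ramping `FLAGS["new_quota_check"]` from 5 toward 100 over days, while watching error rates, is the "slow implementation" the practice refers to.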

Going forward, Google will change its architecture so if one system fails, it can still operate without crashing, the company said. Google said it will also audit all systems and improve its communications “both automated and human, so our customers get the information they need asap to react to issues.” 

— CNBC’s Jordan Novet contributed to this report.

WATCH: Google buyouts highlight tech’s cost-cutting amid AI CapEx boom

