A large hallway with supercomputers inside a server room data center. 

Luza Studios | E+ | Getty Images

Malaysia is emerging as a data center powerhouse in Southeast Asia, and in Asia more broadly, as demand surges for cloud computing and artificial intelligence.

Over the past few years, the country has attracted billions of dollars in data center investments, including from tech giants like Google, Nvidia and Microsoft.

Much of that investment has gone to the small city of Johor Bahru, located on the border with Singapore, according to James Murphy, APAC managing director at data center intelligence company DC Byte.

“It looks like in the space of a couple of years, [Johor Bahru] alone will overtake Singapore to become the largest market in Southeast Asia from a base of essentially zero just two years ago,” he said. 

Johor Bahru was named the fastest-growing market in Southeast Asia in DC Byte’s 2024 Global Data Centre Index.


The report said the city has 1.6 gigawatts of total data center supply, including projects under construction, committed or in the early stages of planning. Data center capacity is typically measured by the amount of electricity a facility consumes.

If all planned capacity comes online across Asia, Malaysia would trail only the larger markets of Japan and India. For now, Japan, followed by Singapore, leads the region in live data center capacity.

The index did not provide a detailed breakdown of data center capacity in China. 

Shifting demand 


Booming demand for AI services also requires specialized data centers that can house the large amounts of data and computing power needed to train and deploy AI models.

While many of these AI data centers will be built in established markets such as Japan, Murphy said emerging markets will also attract investments due to favorable characteristics. 

AI data centers require a lot of space, energy and water for cooling. Therefore, emerging markets such as Malaysia — where energy and land are cheap — provide advantages over smaller city-states like Hong Kong and Singapore, where such resources are limited.

Spillover from Singapore


As a result, a lot of investment and planned capacity has been redirected from Singapore to neighboring Johor Bahru over the years.

Singapore recently changed its tune, laying out a roadmap to grow its data center capacity by 300 megawatts on the condition that new projects meet efficiency and renewable energy standards. Such efforts have attracted investments from companies like Microsoft and Google.

Still, Singapore is too small for wide-scale green power generation, so significant limitations remain on the market, said DC Byte’s Murphy.

Resource strains


Local officials are increasingly concerned about the extent of this power usage, according to a recent report from The Straits Times.

Johor Bahru city council mayor Mohd Noorazam Osman reportedly said data center investments should not compromise local resource needs, given the city’s challenges with its water and power supply.

Meanwhile, a Johor Investment, Trade, and Consumer Affairs Committee official told ST that the state government would implement more guidelines on green energy use for data centers in June.


OpenAI wins $200 million U.S. defense contract


OpenAI CEO Sam Altman speaks during the Snowflake Summit in San Francisco on June 2, 2025.

Justin Sullivan | Getty Images News | Getty Images

OpenAI has been awarded a $200 million contract to provide the U.S. Defense Department with artificial intelligence tools.

The department announced the one-year contract on Monday, months after OpenAI said it would collaborate with defense technology startup Anduril to deploy advanced AI systems for “national security missions.”

“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said. It’s the first contract with OpenAI listed on the Department of Defense’s website.

Anduril received a $100 million defense contract in December. Weeks earlier, OpenAI rival Anthropic said it would work with Palantir and Amazon to supply its AI models to U.S. defense and intelligence agencies.

Sam Altman, OpenAI’s co-founder and CEO, said at a Vanderbilt University event in April that “we have to and are proud to and really want to engage in national security areas.” He was speaking in a discussion with OpenAI board member and former National Security Agency leader Paul Nakasone.

OpenAI did not immediately respond to a request for comment.

The Defense Department specified that the contract is with OpenAI Public Sector LLC, and that the work will mostly occur in the National Capital Region, which encompasses Washington, D.C., and several nearby counties in Maryland and Virginia.

Meanwhile, OpenAI is working to build additional computing power in the U.S. In January, Altman appeared alongside President Donald Trump at the White House to announce the $500 billion Stargate project to build AI infrastructure in the U.S.

The new contract will represent a small portion of revenue at OpenAI, which is generating over $10 billion in annualized sales. In March, the company announced a $40 billion financing round at a $300 billion valuation.

In April, Microsoft, which supplies cloud infrastructure to OpenAI, said the U.S. Defense Information Systems Agency has authorized the use of the Azure OpenAI service with secret classified information. 



Amazon Kuiper second satellite launch postponed by ULA due to rocket booster issue


A United Launch Alliance Atlas V rocket is shown on its launch pad carrying Amazon’s Project Kuiper internet network satellites as the vehicle is prepared for launch at the Cape Canaveral Space Force Station in Cape Canaveral, Florida, U.S., April 28, 2025.

Steve Nesius | Reuters

United Launch Alliance on Monday was forced to delay the second flight carrying a batch of Amazon’s Project Kuiper internet satellites because of a problem with the rocket booster.

With roughly 30 minutes left in the countdown, ULA announced it was scrubbing the launch due to an issue with “an elevated purge temperature” within its Atlas V rocket’s booster engine. The company said it will provide a new launch date at a later point.

“Possible issue with a GN2 purge line that cannot be resolved inside the count,” ULA CEO Tory Bruno said in a post on Bluesky. “We will need to stand down for today. We’ll sort it and be back.”

The launch from Florida’s Space Coast had been set for last Friday, but was rescheduled to Monday at 1:25 p.m. ET due to inclement weather.


Amazon in April successfully sent 27 Kuiper internet satellites into low Earth orbit, the region of space within 1,200 miles of the Earth’s surface. The second mission will send “another 27 satellites into orbit, bringing our total constellation size to 54 satellites,” Amazon said in a blog post.

Kuiper is the latest entrant in the burgeoning satellite internet industry, which aims to beam high-speed internet to the ground from orbit. The industry is currently dominated by Elon Musk’s SpaceX, which operates Starlink. Other competitors include SoftBank-backed OneWeb and Viasat.

Amazon is targeting a constellation of more than 3,000 satellites. The company has to meet a Federal Communications Commission deadline to launch half of its total constellation, or 1,618 satellites, by July 2026.



Google issues apology, incident report for hourslong cloud outage


Thomas Kurian, CEO of Google Cloud, speaks at a cloud computing conference held by the company in 2019.

Michael Short | Bloomberg | Getty Images

Google apologized for a major outage that the company said was caused by multiple layers of flawed recent updates.

The company released an incident report late on Friday that explained hours of downtime on Thursday. More than 70 Google cloud services stopped working properly across the globe, knocking down or disrupting dozens of third-party services, including Cloudflare, OpenAI and Shopify. Gmail, Google Calendar, Google Drive, Google Meet and other first-party products also malfunctioned.

“We deeply apologize for the impact this outage has had,” Google wrote in the incident report. “Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust of our systems. We are committed to making improvements to help avoid outages like this moving forward.”

Thomas Kurian, CEO of Google’s cloud unit, also posted about the outage in an X post on Thursday, saying “we regret the disruption this caused our customers.”

Google in May added a new feature to its “quota policy checks” for evaluating automated incoming requests, but the feature wasn’t immediately tested in real-world situations, the company wrote in the incident report. As a result, the company’s systems didn’t know how to properly handle data generated by the feature, which included blank entries. Those blank entries were then sent out to all Google Cloud data center regions, prompting the crashes, the company wrote.
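The failure pattern Google describes is a familiar one: code that assumes a field is always populated crashes the moment it receives a blank value, and replicating that bad data to every region turns a local bug into a global outage. The sketch below is purely illustrative, using hypothetical function and field names rather than Google’s actual code, but it shows how an unchecked blank entry can take down a request path and how defensive handling contains the damage:

```python
# Illustrative sketch only: hypothetical names, not Google's actual code.

def check_quota_policy(policy: dict) -> bool:
    # Pre-incident assumption: every policy record carries a numeric "limit".
    # A blank entry (None) makes this comparison raise a TypeError,
    # crashing the service that evaluates the incoming request.
    return policy["limit"] > 0

def check_quota_policy_defensive(policy: dict) -> bool:
    # Defensive version: treat blank or missing fields as invalid input.
    # One bad record is rejected; the process keeps serving other requests.
    limit = policy.get("limit")
    if limit is None:
        return False
    return limit > 0
```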

Engineers figured out the issue in 10 minutes, according to the company. However, the entire incident went on for seven hours after that, with the crash leading to an overload in some larger regions.

As it released the feature, Google did not use feature flags, an increasingly common industry practice of rolling out a change gradually to limit the impact if problems occur. Feature flags would have caught the issue before the feature became widely available, Google said.
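As a rough illustration of the practice Google says it skipped, a feature flag gates a new code path behind a runtime switch so it can be enabled for a small slice of traffic first and rolled back instantly. The names below are invented for the example, not a real Google or third-party API:

```python
import random

# Hypothetical feature-flag gate, invented for illustration.
# Rollout percentages would normally live in a config service so they
# can be changed (or zeroed out) without redeploying code.
FLAG_ROLLOUT_PERCENT = {"new_quota_policy_check": 1}  # start at 1% of traffic

def flag_enabled(name: str) -> bool:
    # Enable the flagged path for roughly FLAG_ROLLOUT_PERCENT[name]% of calls.
    return random.uniform(0, 100) < FLAG_ROLLOUT_PERCENT.get(name, 0)

def legacy_policy_check(request) -> bool:
    return True  # stand-in for the proven code path

def new_policy_check(request) -> bool:
    return True  # stand-in for the new code path under gradual rollout

def evaluate_request(request) -> bool:
    # A crash in new_policy_check would surface on roughly 1% of traffic,
    # not everywhere at once, and setting the flag to 0 rolls it back.
    if flag_enabled("new_quota_policy_check"):
        return new_policy_check(request)
    return legacy_policy_check(request)
```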

Going forward, Google will change its architecture so that if one component fails, the rest of the system can keep operating without crashing, the company said. It will also audit all systems and improve its communications “both automated and human, so our customers get the information they need asap to react to issues.”

— CNBC’s Jordan Novet contributed to this report.
