NEW CARLISLE, Indiana — A year ago, it was farmland. Now, the 1,200-acre site near Lake Michigan is home to one of the largest operational AI data centers in the world. It’s called Project Rainier, and it’s the spot where Amazon is training frontier artificial intelligence models entirely on its own chips.
Amazon and its competitors have pledged more than $1 trillion toward AI data center projects so ambitious that skeptics wonder whether there's enough money, energy and community support to get them off the ground.
OpenAI has Stargate — its name for a slate of mammoth AI data centers that it plans to develop. Rainier is Amazon’s $11 billion answer. And it’s not a concept, but a cluster that’s already online.
The complex was built exclusively to train and run models from Anthropic, the AI startup behind Claude, and one of Amazon’s largest cloud customers and AI partners.
“This is not some future project that we’ve talked about that maybe comes alive,” Matt Garman, CEO of Amazon Web Services, told CNBC in an interview at Amazon’s Seattle headquarters. “This is running and training their models today.”
Tech’s megacaps are all racing to build supercomputing sites to meet an expected explosion in demand. Meta is planning a 2-gigawatt Hyperion site in Louisiana, while Google parent Alphabet just broke ground in West Memphis, Arkansas, across the Mississippi River from Elon Musk’s Colossus data center for his startup xAI.
In the span of a month, OpenAI committed to 33 gigawatts of new compute, a buildout CEO Sam Altman says represents $1.4 trillion in upcoming obligations, with partners including Nvidia, Advanced Micro Devices, Broadcom and Oracle.
Amazon is already delivering, thanks to decades of experience in large-scale logistics. From massive fulfillment centers and logistics hubs to AWS data centers and its HQ2 project, Amazon has deep and close relationships with state and local officials and a playbook that’s now being used to get AI infrastructure set up in record time.
“These deals all sound great on paper,” said Mike Krieger, chief product officer at Anthropic, which has raised billions of dollars from Amazon. “But they only materialize when they’re actually racked and loaded and usable by the customer. And Amazon is incredible at that.”
The public unveiling of Rainier comes a day ahead of Amazon’s third-quarter earnings report. Investors will be listening closely for commentary on capital expenditures, but they also want to know how quickly capex projects will convert into revenue, and eventually, profit.
On Tuesday, Amazon announced 14,000 layoffs as part of a broader push to flatten management and reallocate resources to priority areas like AI and the company’s Trainium chips.
The genesis of the Rainier complex dates back to the spring of 2023.
Roughly six months after ChatGPT launched, Amazon started scouting land in rural Indiana, working with American Electric Power through its Indiana Michigan Power subsidiary. A year later, it signed an $11 billion agreement with Indiana, the largest capital investment in the state’s history.
Construction began in September of last year and, as of this month, seven buildings are already online, with two more campuses underway. The full site will eventually span 30 buildings and draw more than 2.2 gigawatts of electricity, enough to power more than 1.6 million homes.
Indiana Michigan Power is in the final stages of acquiring a natural gas plant in Oregon, Ohio, that would make up 15% of the utility’s power by the end of 2026 and help power the AWS AI data center in New Carlisle, Indiana.
Josh Sallabedra, who’s spent 14 years building data centers for Amazon, is now the Indiana site lead. He relocated from the West Coast last year to oversee the project. Sallabedra brought on four general contractors to accelerate the timeline and says he’s never seen the company move this fast.
“That’s the customer demand right now,” Sallabedra told CNBC. “As we saw AI and machine learning coming, we changed to a different building type.”
While some tech giants are throwing up temporary structures to move faster — Meta is building under giant tents in Ohio — Amazon took a more deliberate path. Midway through construction, it updated its facility design to speed up deployment.
“It’s not just fast,” said Garman. “It is secure and reliable AWS infrastructure … an industrial, enterprise-scale data center.”
Or, as Garman described it, “Cornfields to data centers, almost overnight.”
‘Difficult to keep losing farmland’
The site still feels raw. Workers in safety vests move between trailers as steel beams rise in the distance. Convoys of pickup trucks kick up dust past unfinished warehouse shells. From the security gate, a line of streetlamps stretches toward the data center core, where lifts haul crates packed with chips.
This quiet stretch of rural Indiana, dotted with grain silos, transmission lines, and the occasional barn, has become a magnet for ambitious infrastructure projects. General Motors and Samsung are jointly building a $3.5 billion electric vehicle battery plant next door. At peak, more than 4,000 construction workers have been showing up each day in a town with a population of just 1,900.
AWS site lead Josh Sallabedra with MacKenzie Sigalos
Locals don’t necessarily love the trend.
“It’s just difficult to keep losing farmland,” said Marcy Kauffman, president of New Carlisle’s town council. “And this took a lot of farmland.”
Dan Caruso, a longtime resident of the area, worries that this is just the beginning.
“My friends tried to tell me, ‘You can’t let them come in, because once they get their toe in there, they’ll want more,'” Caruso said. “And that’s exactly what happened.”
Indiana Michigan Power says peak power demand will more than double by the end of the decade, raising questions about household utility bills. One report found that monthly electricity bills in neighborhoods near data center sites like this one are 267% higher than they were five years ago.
And expansion isn’t slowing anytime soon.
“We’re rapidly adding new capacity all over the place,” Garman said. “I don’t know that we’ll be done ever. We’re going to continue to build as our customers need more capacity.”
Rainier’s seven data center buildings are packed wall-to-wall with Trainium 2, Amazon’s custom-built chips. Nvidia’s market-leading graphics processing units are nowhere to be found. Amazon claims this is the largest known deployment of non-Nvidia compute anywhere in the world.
“They’re already running about 500,000 chips in Indiana today,” Garman said. “And in fact, it’s going so well that they’ve actually doubled down on that order.” Amazon expects the number to reach a million by the end of the year.
AWS showed CNBC its Trainium 2 chips that fill its AI data center in New Carlisle, Indiana, on October 8, 2025.
Trainium 3, developed in collaboration with Anthropic, is set to launch in the next few months.
It’s the latest example of the tightening bond between the two companies. Anthropic’s primary infrastructure runs on AWS, and it’s one of the first major AI labs to train models on Amazon’s custom silicon. Amazon has invested $8 billion in the startup as part of its broader AI strategy.
While Trainium can’t match Nvidia’s GPUs in raw performance, AWS says its technology offers greater density and efficiency, packing more chips into each data center to deliver higher aggregate compute while reducing power and cooling costs.
Amazon and Anthropic have co-designed silicon based on real-world training demands. Garman and Krieger both told CNBC that Anthropic provided direct input to speed up training, cut latency and improve energy efficiency.
With Trainium 3, one major goal is to better support frontier models.
“It gives better performance, it gives better latency characteristics, it gets better power consumption per flop,” Garman said. “That will be deployed inside of Indiana. It’ll be deployed in many of our other data centers all around the world.”
Prasad Kalyanaraman, vice president of infrastructure services at AWS, said it’s critical to be “able to control the stack all the way from the lower layers of the infrastructure” in order to “build the right set of capabilities that these model providers want.”
CNBC’s MacKenzie Sigalos spoke to AWS CEO Matt Garman about Project Rainier in Seattle, Washington, on October 17, 2025.
Anthropic is moving at a breakneck pace, and burning mounds of cash in the process, as it races to keep up with OpenAI and others.
The company’s annual revenue run rate is nearing $7 billion. Its Claude chatbot powers more than 300,000 businesses, a 300-fold increase over the last two years. The number of large enterprise customers, each producing more than $100,000 in annual revenue, has jumped nearly sevenfold in just a year.
Claude Code, Anthropic’s new agentic coding assistant, generated $500 million in annualized revenue within its first two months.
But Anthropic isn’t counting exclusively on Amazon as it carves its future path. Last week, the company announced a partnership with Alphabet that gives Anthropic access to up to 1 million of Google’s custom-designed Tensor Processing Units, or TPUs. The deal is worth tens of billions of dollars.
Anthropic had already received funding from Google, and Krieger said the company needs all the processing power it can get.
“There is such demand for our models,” said Krieger, “that I think the only way we would have been able to serve as much as we’ve been able to serve so far this year is this multi-chip strategy.”
Garman is well aware of the multi-cloud and multi-chip efforts, and said Amazon has no plans to do anything drastic, like bidding to buy Anthropic.
“We love the partnership as it is,” he said.
— CNBC’s Katie Tarasov and Erin Black contributed to this report.
Sam Altman, CEO of OpenAI, attends the annual Allen and Co. Sun Valley Media and Technology Conference at the Sun Valley Resort in Sun Valley, Idaho, on July 8, 2025.
OpenAI on Wednesday announced two reasoning models that developers can use to classify a range of online safety harms on their platforms.
The artificial intelligence models are called gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, and their names reflect their sizes. They are fine-tuned, or adapted, versions of OpenAI’s gpt-oss models, which the company announced in August.
OpenAI is introducing them as so-called open-weight models, which means their parameters, the numerical values a model learns during training that shape its outputs and predictions, are publicly available. Open-weight models can offer transparency and control, but they are different from open-source models, whose full source code is available for users to customize and modify.
Organizations can configure the new models to their specific policy needs, OpenAI said. And since they are reasoning models that show their work, developers will have more direct insight into how they arrive at a particular output.
For instance, a product reviews site could develop a policy and use gpt-oss-safeguard models to screen reviews that might be fake, OpenAI said. Similarly, a video game discussion forum could classify posts that discuss cheating.
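As a rough illustration of what that review-screening workflow could look like, here is a minimal, hypothetical sketch rather than an example published by OpenAI: it loads the smaller open-weight model from Hugging Face with the transformers library and asks it to apply a custom review policy. The repo id, policy wording and output handling below are assumptions; the model card on Hugging Face documents the officially supported prompt format.

```python
# Hypothetical sketch: screening a product review against a custom policy with
# the open-weight gpt-oss-safeguard-20b model via Hugging Face transformers.
# The repo id, prompt wording and output parsing are assumptions, not an
# official OpenAI example.
from transformers import pipeline

REVIEW_POLICY = (
    "You are a content classifier for a product reviews site. "
    "Label the review VIOLATING if it appears fake or incentivized, for example "
    "paid promotion, copied boilerplate, or advertising for an unrelated product. "
    "Otherwise label it COMPLIANT. Briefly explain your reasoning."
)

classifier = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",  # assumed Hugging Face repo id
    torch_dtype="auto",
    device_map="auto",
)

def classify_review(review_text: str) -> str:
    """Apply the site's policy to a single review and return the model's verdict."""
    messages = [
        {"role": "system", "content": REVIEW_POLICY},
        {"role": "user", "content": f"Review to evaluate:\n{review_text}"},
    ]
    result = classifier(messages, max_new_tokens=256)
    # The chat-style pipeline returns the full conversation; the last message
    # is the model's reply, which should contain its reasoning and a label.
    return result[0]["generated_text"][-1]["content"]

print(classify_review("Amazing!!! Use code SAVE20 at my own shop for an even better deal."))
```

Because these are reasoning models, the reply can include the model's working alongside the label, which is the kind of visibility into outputs that OpenAI describes.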
OpenAI developed the models in partnership with Robust Open Online Safety Tools, or ROOST, an organization dedicated to building safety infrastructure for AI. Discord and SafetyKit also helped test the models. They are initially available in a research preview, and OpenAI said it will seek feedback from researchers and members of the safety community.
As part of the launch, ROOST is establishing a model community for researchers and practitioners who are using AI models to protect online spaces.
The announcement could help OpenAI placate some critics who have accused the startup of commercializing and scaling too quickly at the expense of AI ethics and safety. The startup is valued at $500 billion, and its consumer chatbot, ChatGPT, has surpassed 800 million weekly active users.
On Tuesday, OpenAI said it’s completed its recapitalization, cementing its structure as a nonprofit with a controlling stake in its for-profit business. OpenAI was founded in 2015 as a nonprofit lab, but has emerged as the most valuable U.S. tech startup in the years since releasing ChatGPT in late 2022.
“As AI becomes more powerful, safety tools and fundamental safety research must evolve just as fast — and they must be accessible to everyone,” ROOST President Camille François said in a statement.
Eligible users can download the model weights on Hugging Face, OpenAI said.
Fiserv‘s stock plummeted 44% Wednesday and headed for its worst day ever after the fintech company cut its earnings outlook and shook up some of its leadership team.
“Our current performance is not where we want it to be nor where our stakeholders expect it to be,” wrote CEO Mike Lyons in a release.
For the full year, Fiserv now expects adjusted earnings of $8.50 to $8.60 a share, down from a previous forecast of $10.15 to $10.30. Revenues are expected to grow 3.5% to 4%, versus a prior estimate of 10%.
Adjusted earnings came in at $2.04 per share, falling short of the LSEG estimate of $2.64. Revenues rose about 1% from a year ago to $4.92 billion, missing the $5.36 billion forecast. Net income grew to $792 million from $564 million in the year-ago period.
Along with the results, Fiserv announced a slew of executive and board changes.
Beginning in December, operating chief Takis Georgakopoulos will serve as co-president alongside Dhivya Suryadevara, who was most recently CEO of Optum Financial Services and Optum Insight at UnitedHealth Group. Fiserv also promoted Paul Todd to finance chief.
“We also have opportunities in front of us to improve our results and execution, and I am confident that these are the right leaders to help guide Fiserv to long-term success,” Lyons wrote in a separate release.
Fiserv also announced that Gordon Nixon, Céline Dufétel and Gary Shedlin would join its board at the beginning of 2026, with Nixon serving as independent chairman of the board. Shedlin is slated to lead the audit committee.
The Milwaukee, Wisconsin-based company also announced an action plan that Lyons said would better situate the company to “drive sustainable, high-quality growth” and reach its “full potential.”
Fiserv said it will move its stock from the NYSE to the Nasdaq next month, where it will trade under the ticker symbol “FISV.”
Fiserv did not immediately respond to CNBC’s request for comment.
Character.AI on Wednesday announced that it will soon shut off the ability for minors to have free-ranging chats, including romantic and therapeutic conversations, with the startup’s artificial intelligence chatbots.
The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18.
Last year, 14-year-old Sewell Setzer III committed suicide after forming sexual relationships with chatbots on Character.AI’s app. Many AI developers, including OpenAI and Facebook parent Meta, have come under scrutiny following user deaths, including suicides, tied to relationships with their chatbots.
As part of its safety initiatives, Character.AI said on Wednesday that it will limit users under 18 to two hours of open-ended chats per day, and will eliminate those types of conversations for minors by Nov. 25.
“This is a bold step forward, and we hope this raises the bar for everybody else,” Character.AI CEO Karandeep Anand told CNBC.
Character.AI introduced changes to prevent minors from engaging in sexual dialogues with its chatbots in October 2024. The same day, Sewell’s family filed a wrongful death lawsuit against the company.
To enforce the policy, the company said it’s rolling out an age assurance function that will use first-party and third-party software to determine a user’s age. The company is partnering with Persona, the same firm used by Discord and others, to help with verification.
In 2024, Character.AI’s founders and certain members of its research team joined Google DeepMind, the search giant’s AI unit. It’s one of a number of such deals announced by leading tech companies to speed their development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license for its current large language model, or LLM, technology.
Since Anand took over as CEO in June, 10 months after the Google deal, Character.AI has added more features to diversify its offering from chatbot conversations. Those features include a feed for watching AI-generated videos as well as storytelling and roleplay formats.
Although Character.AI will no longer allow teenagers to engage in open-ended conversations on its app, those users will still have access to the app’s other offerings, said Anand, who was previously an executive at Meta.
Of the startup’s roughly 20 million monthly active users, about 10% are under 18. Anand said that percentage has declined as the app has shifted its focus toward storytelling and roleplaying.
The app makes money primarily through advertising and a $10 monthly subscription. Character.AI is on track to end the year with a run rate of $50 million, Anand said.
Additionally, the company on Wednesday announced that it will establish and fund an independent AI Safety Lab dedicated to safety research for AI entertainment. Character.AI didn’t say how much it will provide in funding, but the startup said it’s inviting other companies, academics, researchers and policy makers to join the nonprofit effort.
Regulatory pressure
Character.AI is one of many AI chatbot companies facing regulatory scrutiny on the matter of teens and AI companions.
In September, the Federal Trade Commission issued an order to seven companies, including Character.AI’s parent, as well as Alphabet, Meta, OpenAI and Snap, seeking to understand the potential effects of AI chatbots on children and teenagers.
On Tuesday, Senators Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., announced legislation to ban AI chatbot companions for minors. California Gov. Gavin Newsom signed a law earlier this month requiring chatbots to disclose that they are AI and to tell minors to take a break every three hours.
Rival Meta, which also offers AI chatbots, announced safety features in October that will allow parents to see and manage how their teenagers are interacting with AI characters on the company’s platforms. Parents have the option to turn off one-on-one chats with AI characters completely and can block specific AI characters.
The matter of sexualized conversations with AI chatbots has come into focus as tech companies announce different approaches to dealing with the issue.
Earlier this month, Sam Altman announced that OpenAI would allow adult users to engage in erotica with ChatGPT later this year, saying that his company is “not the elected moral police of the world.”
Microsoft AI CEO Mustafa Suleyman said last week that the software company will not provide “simulated erotica,” describing sexbots as “very dangerous.” Microsoft is a key investor and partner to OpenAI.
The race to develop more realistic human-like AI companions has been growing in Silicon Valley since ChatGPT’s launch in late 2022. While some people are creating deep connections with AI characters, the speedy development presents ethical and safety concerns, especially for children and teenagers.
“I have a six-year-old as well, and I want to make sure that she grows up in a safe environment with AI,” Anand said.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.