OpenAI on Tuesday announced its biggest product launch since its enterprise rollout. It’s called ChatGPT Gov and was built specifically for U.S. government use.
The Microsoft-backed company bills the new platform as a step beyond ChatGPT Enterprise in terms of security. It allows government agencies, as customers, to feed “non-public, sensitive information” into OpenAI’s models while operating within their own secure hosting environments, OpenAI CPO Kevin Weil told reporters during a briefing Monday.
OpenAI said that since the beginning of 2024, more than 90,000 employees of federal, state and local governments have generated more than 18 million prompts within ChatGPT, using the tech to translate and summarize documents, write and draft policy memos, generate code, and build applications.
The user interface for ChatGPT Gov looks like ChatGPT Enterprise. The main difference is that government agencies will use ChatGPT Gov in their own Microsoft Azure commercial cloud, or Azure Government community cloud, so they can “manage their own security, privacy and compliance requirements,” Felipe Millon, who leads federal sales and go-to-market for OpenAI, said on the call with reporters.
For as long as artificial intelligence has been used by government agencies, it’s faced significant scrutiny over its potentially harmful ripple effects, especially for vulnerable and minority populations, as well as over data privacy concerns. Police use of AI has led to a number of wrongful arrests and, in California, voters rejected a plan to replace the state’s bail system with an algorithm due to concerns it would increase bias.
An OpenAI spokesperson told CNBC that the company acknowledges there are special considerations for government use of AI, and OpenAI wrote in a blog post Tuesday that the product is subject to its usage policies.
Aaron Wilkowitz, a solutions engineer at OpenAI, showed reporters a demo of a day in the life of a new Trump administration employee, allowing the person to sign into ChatGPT Gov and create a five-week plan for some of their job duties, then analyze an uploaded photo of the same printed-out plan with notes and markings all over it. Wilkowitz also demonstrated how ChatGPT Gov could draft a memo to the legal and compliance department summarizing its own AI-generated job plan and then translate the memo into different languages.
ChatGPT Enterprise, which underpins ChatGPT Gov, is currently going through the Federal Risk and Authorization Management Program, or FedRAMP, and has not yet been accredited for use on nonpublic data. Weil told CNBC it’s a “long process,” adding that he couldn’t provide a timeline.
“I know President Trump is also looking at how we can potentially streamline that, because it’s one way of getting more modern software tooling into the government and helping the government run more efficiently,” Weil said. “So we’re very excited about that.”
But OpenAI’s Millon said ChatGPT Gov will be available in the “near future,” with customers potentially testing and using the product live “within a month.” He said he foresees agencies with sensitive data, such as defense, law enforcement and health care, benefiting most from the product.
When asked if the Trump administration played a role in ChatGPT Gov, Weil said he was in Washington, D.C., for the inauguration and “got to spend a lot of time with folks coming into the new administration.” He added that “the focus is on ensuring that the U.S. wins in AI” and that “our interests are very aligned.”
OpenAI CEO Sam Altman attended the inauguration alongside other tech CEOs and has recently joined the growing tide of industry leaders publicly pronouncing their admiration for President Donald Trump or donating to his inauguration fund. Altman wrote on X that watching Trump “more carefully recently has really changed my perspective on him,” adding that “he will be incredible for the country in many ways.”
A few days before the inauguration, Altman received a letter from U.S. senators expressing concern that he is attempting to “cozy up to the incoming Trump administration” with the aim of avoiding regulation and limiting scrutiny.
Regarding China’s DeepSeek, Weil told reporters the new developments don’t change how OpenAI thinks about its product road map; instead, he said, the news “underscores how important it is that the U.S. wins this race.”
“It’s a super competitive industry, and this is showing that it’s competitive globally, not just within the U.S.,” Weil said. “We’re committed to moving really quickly here. We want to stay ahead.”
U.S. stocks had a rocky day of trading, swinging between highs and lows like the quality of Game of Thrones across its eight seasons.
At its lowest during the session, the S&P 500 fell as much as 1.5%, but picked up and traded positive for most of the day after U.S. Trade Representative Jamieson Greer gave an indication that China’s next trade move could influence the implementation of President Donald Trump’s tariffs.
The optimism in markets fizzled, however, when Trump said he was considering “terminating business with China having to do with Cooking Oil” and other forms of “retribution” because the country has stopped buying U.S. soybeans since May. Investors seemed to take that threat seriously, sending the S&P 500 down 0.2% for the day.
Developments elsewhere, however, were more positive. U.S. Federal Reserve Chair Jerome Powell suggested that the central bank might stop tightening monetary policy with regard to its bond holdings. Furthermore, big banks — bellwethers for economic activity — such as JPMorgan Chase, Citi and Goldman Sachs, beat earnings expectations, suggesting that fundamentals are still sound.
And while Oracle’s turn to AMD’s artificial intelligence chips — hence diversifying from Nvidia graphics processing units — might not be pleasant news for Jensen Huang, spreading out concentration risk could be a positive outcome for investors banking on AI to continue the market rally.
The question, then, is whether Trump will raze the AI-supported market with his tariffs — or if the Magnificent Seven kingdom will stand.
Powell suggests the Fed might stop tightening policy. The U.S. central bank could cease reducing the size of its bond holdings, which would allow liquidity to be maintained in the economy, Powell said in a prepared speech Tuesday.
Oracle to deploy AMD artificial intelligence chips. Oracle will use 50,000 of AMD’s Instinct MI450 chips beginning in the second half of 2026, in a sign that companies are turning to Nvidia’s competitors for their processing needs.
U.S. stocks were mixed. On Tuesday, the S&P 500 and Nasdaq Composite fell but recovered from session lows. The Dow Jones Industrial Average, however, closed in the green. The pan-European Stoxx 600 index dropped 0.37% and touched two-week lows in the session.
[PRO] An attractive European fixed income play. This niche area has “real value,” according to BlackRock’s James Turner, co-head of global fixed income in EMEA. In addition, it offers protection against the risk of interest rate fluctuations.
And finally…
While most agree that U.S. President Donald Trump deserves credit for helping to bring an immediate end to the devastating war between Israel and Hamas, achieving a long-lasting peace is a different matter. Analysts note that detail is scant in Trump’s 20-point peace plan, meaning there are a number of grey areas and room for discontent and disagreement in the near and long-term.
This is particularly salient both for immediate matters in the peace proposal, such as the demilitarization of Hamas and the withdrawal of Israeli forces from the Gazan territory it currently controls, and for perhaps the biggest bone of contention: a two-state solution for the Israelis and Palestinians.
Meta on Tuesday removed a Facebook group page that was allegedly used to “dox and target” U.S. Immigration and Customs Enforcement agents in Chicago, after the Department of Justice contacted the company.
Attorney General Pam Bondi revealed the Facebook takedown in an X post, and said that the DOJ “will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.”
A Meta spokesperson confirmed that the tech giant removed the Facebook group page, but declined to comment about its size and the specific details that warranted its removal.
“This Group was removed for violating our policies against coordinated harm,” the Meta spokesperson said in a statement that also referred to the company’s policies pertaining to “Coordinating Harm and Promoting Crime.”
Meta’s removal of the Facebook group page follows similar moves from rivals like Apple and Google, which have recently removed apps that could be used to anonymously report sightings of ICE agents and other law enforcement.
Apple took down the ICEBlock app nearly two weeks ago following pressure from Bondi, who said at the time that the app was “designed to put ICE agents at risk just for doing their jobs.”
Apple said at the time in a statement that it removed the ICEBlock app based on information provided by law enforcement about alleged “safety risks.”
Google, which never hosted the ICEBlock app on its app store, said in October that while the DOJ never contacted the search giant, the company removed “similar apps for violations of our policies.”
ICEBlock creator Joshua Aaron criticized both Apple and the White House in an interview with CNBC, and compared his app to others like Waze, which lets drivers report when they see law enforcement officers in order to avoid getting ticketed for speeding.
“This is about our fundamental constitutional rights in this country being stripped away by this administration, and the powers that be who are capitulating to their requests,” Aaron said.
OpenAI on Tuesday announced a council of eight experts who will advise the company and provide insight into how artificial intelligence could affect users’ mental health, emotions and motivation.
The group, which is called the Expert Council on Well-Being and AI, will initially guide OpenAI’s work on its chatbot ChatGPT and its short-form video app Sora, the company said. Through check-ins and recurring meetings, OpenAI said the council will help it define what healthy AI interactions look like.
OpenAI has been expanding its safety controls in recent months as the company has faced mounting scrutiny over how it protects users, particularly minors.
In September, the Federal Trade Commission launched an inquiry into several tech companies, including OpenAI, over how chatbots like ChatGPT could negatively affect children and teenagers. OpenAI is also embroiled in a wrongful death lawsuit from a family who blames ChatGPT for their teenage son’s death by suicide.
The company is building an age prediction system that will automatically apply teen-appropriate settings for users under 18, and it launched a series of parental controls late last month. Parents can now get notified if their child is showing signs of acute distress, for instance.
OpenAI said it began informally consulting with members of its new expert council as it was building its parental controls. The company brought on additional experts in psychiatry, psychology and human-computer interaction as it formalized the council, which officially launched with an in-person session last week.
In addition to its expert council, OpenAI said it is also working with researchers and mental health clinicians within the Global Physician Network who will help test ChatGPT and establish company policies.
Here are the members of OpenAI’s Expert Council on Well-Being and AI:
Andrew Przybylski, a professor of human behavior and technology at the University of Oxford.
David Bickham, a research scientist in the Digital Wellness Lab at Boston Children’s Hospital.
David Mohr, the director of Northwestern University’s Center for Behavioral Intervention Technologies.
Mathilde Cerioli, the chief scientist at Everyone.AI, a nonprofit that explores the risks and benefits of AI for children.
Munmun De Choudhury, a professor at Georgia Tech’s School of Interactive Computing.
Dr. Robert Ross, a pediatrician by training and the former CEO of The California Endowment, a nonprofit that aims to expand access to affordable health care.
Dr. Sara Johansen, a clinical assistant professor at Stanford University who founded its Digital Mental Health Clinic.
Tracy Dennis-Tiwary, a professor of psychology at Hunter College.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.