OpenAI on Tuesday announced its biggest product launch since its enterprise rollout. It’s called ChatGPT Gov and was built specifically for U.S. government use.

The Microsoft-backed company bills the new platform as a step beyond ChatGPT Enterprise in terms of security. It allows government agencies, as customers, to feed “non-public, sensitive information” into OpenAI’s models while operating within their own secure hosting environments, OpenAI Chief Product Officer Kevin Weil told reporters during a briefing Monday.

OpenAI said that since the beginning of 2024, more than 90,000 employees of federal, state and local governments have generated more than 18 million prompts in ChatGPT, using the tech to translate and summarize documents, draft policy memos, generate code, and build applications.

The user interface for ChatGPT Gov looks like ChatGPT Enterprise. The main difference is that government agencies will use ChatGPT Gov in their own Microsoft Azure commercial cloud, or Azure Government community cloud, so they can “manage their own security, privacy and compliance requirements,” Felipe Millon, who leads federal sales and go-to-market for OpenAI, said on the call with reporters.

For as long as artificial intelligence has been used by government agencies, it’s faced significant scrutiny due to its potentially harmful ripple effects, especially for vulnerable and minority populations, and data privacy concerns. Police use of AI has led to a number of wrongful arrests and, in California, voters rejected a plan to replace the state’s bail system with an algorithm due to concerns it would increase bias.

An OpenAI spokesperson told CNBC that the company acknowledges there are special considerations for government use of AI, and OpenAI wrote in a blog post Tuesday that the product is subject to its usage policies.

Aaron Wilkowitz, a solutions engineer at OpenAI, showed reporters a demo of a day in the life of a new Trump administration employee, allowing the person to sign into ChatGPT Gov and create a five-week plan for some of their job duties, then analyze an uploaded photo of the same printed-out plan with notes and markings all over it. Wilkowitz also demonstrated how ChatGPT Gov could draft a memo to the legal and compliance department summarizing its own AI-generated job plan and then translate the memo into different languages.

ChatGPT Enterprise, which underpins ChatGPT Gov, is currently going through the Federal Risk and Authorization Management Program, or FedRAMP, and has not yet been accredited for use on nonpublic data. Weil told CNBC it’s a “long process,” adding that he couldn’t provide a timeline.

“I know President Trump is also looking at how we can potentially streamline that, because it’s one way of getting more modern software tooling into the government and helping the government run more efficiently,” Weil said. “So we’re very excited about that.”

But OpenAI’s Millon said ChatGPT Gov will be available in the “near future,” with customers potentially testing and using the product live “within a month.” He said he foresees agencies with sensitive data, such as defense, law enforcement and health care, benefiting most from the product.

When asked if the Trump administration played a role in ChatGPT Gov, Weil said he was in Washington, D.C., for the inauguration and “got to spend a lot of time with folks coming into the new administration.” He added that “the focus is on ensuring that the U.S. wins in AI” and that “our interests are very aligned.”

OpenAI CEO Sam Altman attended the inauguration alongside other tech CEOs and has recently joined the growing tide of industry leaders publicly pronouncing their admiration for President Donald Trump or donating to his inauguration fund. Altman wrote on X that watching Trump “more carefully recently has really changed my perspective on him,” adding that “he will be incredible for the country in many ways.”

A few days before the inauguration, Altman received a letter from U.S. senators expressing concern that he is attempting to “cozy up to the incoming Trump administration” with the aim of avoiding regulation and limiting scrutiny.

Regarding China’s DeepSeek, Weil told reporters the new developments don’t change how OpenAI thinks about its product road map but instead “underscores how important it is that the U.S. wins this race.”

“It’s a super competitive industry, and this is showing that it’s competitive globally, not just within the U.S.,” Weil said. “We’re committed to moving really quickly here. We want to stay ahead.”


SoftBank to acquire chip designer Ampere in $6.5 billion deal


The logo of Japanese company SoftBank Group is seen outside the company’s headquarters in Tokyo on January 22, 2025. 

Kazuhiro Nogi | AFP | Getty Images

SoftBank Group said Wednesday that it will acquire Ampere Computing, a startup that designed an Arm-based server chip, for $6.5 billion. The company expects the deal to close in the second half of 2025, according to a statement.

Carlyle Group and Oracle both have committed to selling their stakes in Ampere, SoftBank said.

Ampere will operate as an independent subsidiary and will keep its headquarters in Santa Clara, California, the statement said.

“Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision, and deepens our commitment to AI innovation in the United States,” SoftBank Group Chairman and CEO Masayoshi Son was quoted as saying in the statement.

The startup has 1,000 semiconductor engineers, SoftBank said in a separate statement.

Chips that use Arm’s instruction set represent an alternative to chips based on the x86 architecture, which Intel and AMD sell. Arm-based chips often consume less energy. Ampere’s founder and CEO, Renee James, established the startup in 2017 after 28 years at Intel, where she rose to the position of president.

Leading cloud infrastructure provider Amazon Web Services offers its Graviton Arm-based chips for rent, and they have become popular among large customers. In October, Microsoft started selling access to its own Cobalt 100 Arm-based cloud computing instances.



Nvidia’s Huang says faster chips are the best way to reduce AI costs


Nvidia CEO Jensen Huang introduces new products as he delivers the keynote address at the GTC AI Conference in San Jose, California, on March 18, 2025.

Josh Edelson | AFP | Getty Images

At the end of Nvidia CEO Jensen Huang’s unscripted two-hour keynote on Tuesday, his message was clear: Get the fastest chips that the company makes.

Speaking at Nvidia’s GTC conference, Huang said that clients’ questions about the cost and return on investment of the company’s graphics processors, or GPUs, will go away with faster chips that can be digitally sliced and used to serve artificial intelligence to millions of people at the same time.

“Over the next 10 years, because we could see improving performance so dramatically, speed is the best cost-reduction system,” Huang said in a meeting with journalists shortly after his GTC keynote.

The company dedicated 10 minutes of Huang’s speech to explaining the economics of faster chips for cloud providers, complete with Huang doing back-of-the-envelope math out loud on each chip’s cost per token, a measure of how much it costs to create one unit of AI output.

Huang told reporters that he presented the math because that’s what’s on the mind of hyperscale cloud and AI companies.

The company’s Blackwell Ultra systems, coming out this year, could provide data centers 50 times more revenue than its Hopper systems because they are so much faster at serving AI to multiple users, Nvidia says.

Investors worry about whether the four major cloud providers — Microsoft, Google, Amazon and Oracle — could slow down their torrid pace of capital expenditures centered around pricey AI chips. Nvidia doesn’t reveal prices for its AI chips, but analysts say Blackwell can cost $40,000 per GPU.

Already, the four largest cloud providers have bought 3.6 million Blackwell GPUs, under Nvidia’s new convention that counts each Blackwell as 2 GPUs. That’s up from 1.3 million Hopper GPUs, Blackwell’s predecessor, Nvidia said Tuesday. 

The company decided to announce its roadmap for 2027’s Rubin Next and 2028’s Feynman AI chips, Huang said, because cloud customers are already planning expensive data centers and want to know the broad strokes of Nvidia’s plans. 

“We know right now, as we speak, in a couple of years, several hundred billion dollars of AI infrastructure” will be built, Huang said. “You’ve got the budget approved. You got the power approved. You got the land.”

Huang dismissed the notion that custom chips from cloud providers could challenge Nvidia’s GPUs, arguing they’re not flexible enough for fast-moving AI algorithms. He also expressed doubt that many of the recently announced custom AI chips, known within the industry as ASICs, would make it to market.

“A lot of ASICs get canceled,” Huang said. “The ASIC still has to be better than the best.”

Huang said his focus is on making sure those big projects use the latest and greatest Nvidia systems.

“So the question is, what do you want for several $100 billion?” Huang said.



Microsoft announces new HR executive, company veteran Amy Coleman


Microsoft’s Amy Coleman (L) and Kathleen Hogan (R).

Source: Microsoft

Microsoft said Wednesday that company veteran Amy Coleman will become its new executive vice president and chief people officer, succeeding Kathleen Hogan, who has held the position for the past decade.

Hogan will remain an executive vice president but move to a newly established Office of Strategy and Transformation, which is an expansion of the office of the CEO. She will join Microsoft’s group of top executives, reporting directly to CEO Satya Nadella.

Coleman is stepping into a major role, given that Microsoft is among the largest employers in the U.S., with 228,000 total employees as of June 2024. She has worked at the company for more than 25 years over two stints, having first joined as a compensation manager in 1996.

Hogan will remain on the senior leadership team.

“Amy has led HR for our corporate functions across the company for the past six years, following various HR roles partnering across engineering, sales, marketing, and business development spanning 25 years,” Nadella wrote in a memo to employees.

“In that time, she has been a trusted advisor to both Kathleen and to me as she orchestrated many cross-company workstreams as we evolved our culture, improved our employee engagement model, established our employee relations team, and drove enterprise crisis response for our people,” he wrote.

Hogan arrived at Microsoft in 2003 after being a development manager at Oracle and a partner at McKinsey. Under Hogan, some of Microsoft’s human resources practices evolved. She has emphasized the importance of employees having a growth mindset instead of a fixed mindset, drawing on concepts from psychologist Carol Dweck.

“We came up with some big symbolic changes to show that we really were serious about driving culture change, from changing the performance-review system to changing our all-hands company meeting, to our monthly Q&A with the employees,” Hogan said in a 2019 interview with Business Insider.

Hogan pushed for managers to evaluate the inclusivity of employees and oversaw changes in the handling of internal sexual harassment cases.

Coleman had been Microsoft’s corporate vice president for human resources and corporate functions for the past four years. In that role, she was responsible for 200 HR workers and led the development of Microsoft’s hybrid work approach, as well as the HR aspect of the company’s Covid response, according to her LinkedIn profile.
