The company is making GPT-5 available to everyone, including its free users. OpenAI said the model is smarter, faster and “a lot more useful,” particularly across domains like writing, coding and health care.
“I tried going back to GPT-4, and it was quite miserable,” OpenAI CEO Sam Altman said in a briefing with reporters.
Since launching its AI chatbot ChatGPT in 2022, OpenAI has rocketed into the mainstream. The company said it expects to hit 700 million weekly active users on ChatGPT this week, and it is in talks with investors about a potential stock sale at a valuation of roughly $500 billion, as CNBC previously reported.
OpenAI said GPT-5’s hallucination rate is lower, which means the model fabricates answers less frequently. The company said it also carried out extensive safety evaluations while developing GPT-5, including 5,000 hours of testing.
Instead of outright refusing to answer users’ questions if they are potentially risky, GPT-5 will use “safe completions,” OpenAI said. This means the model will give high-level responses within safety constraints that can’t be used to cause harm.
“GPT-5 has been trained to recognize when a task can’t be finished, avoid speculation and can explain limitations more clearly, which reduces unsupported claims compared to prior models,” said Michelle Pokrass, a post-training lead at OpenAI.
During the briefing, OpenAI demonstrated how GPT-5 can be used for “vibe coding,” which is a term for when users generate software with AI based on a simple written prompt.
The company asked GPT-5 to create a web app that could help an English speaker learn French. The app had to have an engaging theme and include activities like flash cards and quizzes as well as a way to track daily progress. OpenAI submitted the same prompt into two GPT-5 windows, and it generated two different apps within seconds.
The apps had “some rough edges,” an OpenAI lead said, but users can make further tweaks to the AI-generated software, like changing the background or adding tabs, as they see fit.
GPT-5 is rolling out to OpenAI’s Free, Plus, Pro and Team users on Thursday. This launch will be the first time that Free users have access to a reasoning model, which is a type of model that “thinks,” or carries out an internal chain of thought, before responding. If Free users hit their usage cap, they’ll have access to GPT-5 mini.
OpenAI’s Plus users have higher usage limits, and Pro users have unlimited access to GPT-5 as well as access to GPT-5 Pro. ChatGPT Edu and ChatGPT Enterprise users will get access to GPT-5 roughly a week from Thursday.
“It’s hard to believe it’s only been two and a half years since @sama joined us in Redmond to show the world GPT-4 for the first time in Bing, and it’s incredible to see how far we’ve come since that moment,” Microsoft CEO Satya Nadella wrote in a Thursday X post, referring to OpenAI CEO Sam Altman’s appearance at Microsoft headquarters in Washington in February 2023.
The new model is coming to Microsoft products Thursday, according to a company blog post. Microsoft 365 Copilot is getting GPT-5, as well as the Copilot for consumers and the Azure AI Foundry that developers can use to incorporate AI models into third-party applications.
Box, a company that helps enterprises manage their computer files, has been testing GPT-5 across a wide variety of data sets in recent weeks.
Aaron Levie, the CEO of Box, said previous AI models have failed many of the company’s most advanced tests because they struggle to make sense of complex math or logic within long documents. But Levie said GPT-5 is a “complete breakthrough.”
“The model is able to retain way more of the information that it’s looking at, and then use a much higher level of reasoning and logic capabilities to be able to make decisions,” Levie told CNBC in an interview.
OpenAI is releasing three different versions of the model for developers through its application programming interface, or API. Those versions, gpt-5, gpt-5-mini and gpt-5-nano, are designed for different cost and latency needs.
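The three API tiers trade off capability against cost and latency. A minimal sketch of how a developer might route requests between them and build a standard Chat Completions request body; the `pick_model` helper and its routing rule are illustrative assumptions, not part of OpenAI's API:

```python
import json

def pick_model(needs_deep_reasoning: bool, latency_sensitive: bool) -> str:
    # Illustrative routing rule (an assumption, not an OpenAI recommendation):
    # send heavyweight jobs to the full model, cheap/fast jobs to the smaller tiers.
    if needs_deep_reasoning:
        return "gpt-5"          # most capable, highest cost and latency
    if latency_sensitive:
        return "gpt-5-nano"     # cheapest and fastest tier
    return "gpt-5-mini"         # middle ground

def build_request(prompt: str, model: str) -> str:
    # Standard Chat Completions request body; this JSON would be POSTed to
    # https://api.openai.com/v1/chat/completions with an Authorization header.
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

payload = build_request(
    "Summarize this contract in three bullet points.",
    pick_model(needs_deep_reasoning=False, latency_sensitive=True),
)
print(payload)
```

The sketch only constructs the request; sending it requires an API key and a network call, which are omitted here.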
Earlier this week, OpenAI released two open-weight language models for the first time since it rolled out GPT-2 in 2019. Those models were built to serve as lower-cost options that developers, researchers and companies can easily run and customize.
But with GPT-5, OpenAI also has a broader consumer audience in mind. The company said interacting with the model feels natural and “more human.”
Altman said GPT-5 is like having a team of Ph.D.-level experts on hand at any time.
“People are limited by ideas, but not really the ability to execute, in many new ways,” he said.
The launch of an Instagram feature that details users’ geolocation data elicited backlash from social media users on Thursday.
Meta debuted the Instagram Map tool on Wednesday, pitching the feature as a way to “stay up-to-date with friends” by letting users share their “last active location.” The tool is akin to Snapchat’s Snap Map feature that lets people see where their friends are posting from.
Although Meta said in a blog post that the feature’s “location sharing is off unless you opt in,” several social media users said in posts that they were worried that was not the case.
“I can’t believe Instagram launched a map feature that exposes everyone’s location without any warning,” said one user who posted on Threads, Meta’s micro-blogging service.
Another Threads user said they were concerned that bad actors could exploit the map feature by spying on others.
“Instagram randomly updating their app to include a maps feature without actually alerting people is so incredibly dangerous to anyone who has a restraining order and actively making sure their abuser can’t stalk their location online…Why,” said the user in a Threads post.
Instagram chief Adam Mosseri responded to the complaints on Threads, disputing the notion that the map feature is exposing people’s locations against their will.
“We’re double checking everything, but so far it looks mostly like people are confused and assume that, because they can see themselves on the map when they open, other people can see them too,” Mosseri wrote on Thursday. “We’re still checking everything though to make sure nobody shares location without explicitly deciding to do so, which, by the way, requires a double consent by design (we ask you to confirm after you say you want to share).”
Still, some Instagram users claimed that their locations were being shared despite not opting in to using the map feature.
“Mine was set to on and shared with everyone in the app,” said a user in a Threads post. “My location settings on my phone for IG were set to never. So it was not automatically turned off for me.”
A Meta spokesperson reiterated Mosseri’s comments in a statement and said “Instagram Map is off by default, and your live location is never shared unless you choose to turn it on.”
“If you do, only people you follow back — or a private, custom list you select — can see your location,” the spokesperson said.
Tesla’s vice president of hardware design engineering, Pete Bannon, is leaving the company after first joining in 2016 from Apple, CNBC has confirmed.
Bannon was leading the development of Tesla’s Dojo supercomputer and reported directly to Musk. Bloomberg first reported on Bannon’s departure, and added that Musk ordered the Dojo team to shut down, with engineers in the group getting reassigned to other initiatives.
Tesla didn’t immediately respond to a request for comment.
Since early last year, Musk has been trying to convince shareholders that Tesla, his only publicly traded business, is poised to become an artificial intelligence and robotics powerhouse, and not just an electric vehicle company.
A centerpiece of the transformation was Dojo, a custom-built supercomputer designed to process and train AI models drawing on the large amounts of video and other data captured by Tesla vehicles.
Tesla’s focus on Dojo and another computing cluster called Cortex was meant to improve the company’s advanced driver assistance systems, and to enable Musk to finally deliver on his promise to turn existing Teslas into robotaxis.
On Tesla’s earnings call in July, Musk said the company expected its newest version of Dojo to be “operating at scale sometime next year, with scale being somewhere around 100,000 H-100 equivalents,” referring to a supercomputer built using Nvidia’s state-of-the-art chips.
Tesla recently struck a $16.5 billion deal with Samsung to produce its next-generation AI6 chips domestically.
Tesla is running a test Robotaxi service in Austin, Texas, and a related car service in San Francisco. In Austin, the company’s vehicles require a human safety supervisor in the front passenger seat ready to intervene if necessary. In San Francisco, the car service is operated by human drivers, though invited users can hail a ride through a “Tesla Robotaxi” app.
On the earnings call, Musk faced questions about how he sees Tesla and his AI company, xAI, keeping their distance given that they could be competing against one another for AI talent.
Musk said the companies “are doing different things.” He said, “xAI is doing like terabyte scale models and multi-terabyte scale models.” Tesla uses “100x smaller models,” he said, with the automaker focused on “real-world AI,” for its cars and robots and xAI focused on developing software that strives for “artificial super intelligence.”
Musk also said that some engineers wouldn’t join Tesla because “they wanted to work on AGI,” one reason he said he formed a new company.
Tesla has experienced an exodus of top talent this year due to a combination of job terminations and resignations. Milan Kovac, who was Tesla’s head of Optimus robotics engineering, departed, as did David Lau, a vice president of software engineering, and Omead Afshar, Musk’s former chief of staff.
Here’s how Omada Health did based on analysts’ average estimates compiled by LSEG:
Loss: Loss per share of 24 cents.
Revenue: $61 million vs. $55.2 million expected.
The virtual care company’s revenue increased 49% in its second quarter from $41.21 million a year earlier. The company reported a net loss of $5.31 million, or 24 cents per share, compared to a net loss of $10.69 million, or $1.40 per share, during the same period last year.
“We believe our Q2 performance reflects Omada’s ability to capture tailwinds in cardiometabolic care, to effectively commercialize our GLP-1 Care Track, and to leverage advances in artificial intelligence for the benefit of our members,” Omada CEO Sean Duffy said in a release.
For its full year, Omada expects to report revenue between $235 million and $241 million, while analysts were expecting $222 million. The company said it expects to report an adjusted EBITDA loss of between $5 million and $9 million for the full year, while analysts polled by FactSet expected a wider loss of $20.2 million.
Omada, founded in 2012, offers virtual care programs to support patients with chronic conditions like prediabetes, diabetes and hypertension. The company describes its approach as a “between-visit care model” that is complementary to the broader health-care ecosystem.
The stock opened at $23 in its Nasdaq debut in June. Shares closed at $19.46 on Thursday.
Omada said it finished its second quarter with 752,000 total members, up 52% year over year.
The company will discuss the results during its quarterly call with investors at 4:30 p.m. ET.