Dave Limp, senior vice president of devices and services at Amazon.com Inc., speaks during the Amazon Devices and Services event at the HQ2 campus in Arlington, Virginia, on Sept. 20, 2023.
Amazon introduced a “smarter and more conversational” version of its Alexa voice assistant that the company hopes will bolster its position in the tech industry’s artificial intelligence race.
The company hosts an annual devices bonanza, where it typically unveils a smattering of new hardware and software products. In his final keynote address at the event on Wednesday, Amazon’s devices chief Dave Limp showed off a demo of an updated Alexa that’s freshly equipped with features powered by generative AI.
Limp, a 13-year veteran of Amazon, plans to step down from his role later this year.
From an event space at its new second headquarters in northern Virginia, Amazon showed a montage in which Alexa users were seen asking an Echo smart speaker for information such as the “best dates to travel to Puerto Rico.” One man requested that Alexa tell him a story about balloons, before abruptly changing his mind and asking for a tale about Jell-O.
There were a few hiccups during Limp’s demo. At times, Alexa lagged in its response, and at a few points, Limp had to repeat his question to get an answer.
Amazon calls the new feature “Let’s chat,” and said it will be available as an “early preview” for existing Echo owners in the coming weeks.
The new Alexa will have a more humanlike voice and will be able to hold more natural conversations without being prompted by a wake word. It will also learn about users with each new interaction.
Similar to ChatGPT or other generative AI applications, Alexa will be able to compose messages for users and send them on their behalf. As an example, Amazon showed an invitation that Alexa wrote to a friend, asking the person to come over for a football game.
Rohit Prasad, a senior vice president at Amazon and head scientist overseeing generative AI, gave another sports example.
“The Red Sox are my favorite team,” Prasad said. “Imagine if they won, then Alexa would respond in a joyful voice. If they lost, it will be empathetic to me.”
Amazon previewed ways it’s using AI to better operate smart homes. With upcoming Alexa updates, users will be able to make more conversational requests, like asking the voice assistant to make their lights “look spooky” or say “Alexa, there’s a mess in here,” prompting a robot vacuum to switch on and suck up crumbs.
Limp invoked the phrase "AI hallucinations," a term for the fabricated or mistaken outputs AI models sometimes produce, to explain how Alexa would avoid such errors.
“It would be incredibly frustrating if it hallucinated and turned on the wrong light over and over again,” Limp said, adding that Amazon’s AI models are fine-tuned to be able to work with various smart home applications, so that when “you ask it to turn on the living room light, it’s able to execute that correctly.”
Amazon also debuted new hardware, including an updated Echo Show 8 smart display. The device uses computer vision to adjust its display based on where the user is standing in a room. If they're farther away, it will show fewer items on screen, but as they move closer, it will show more detailed information. Amazon said the device costs $150 and will ship in October.
It also unveiled a $120 Fire TV sound bar that’s available starting Wednesday, and two new Fire TV Sticks that the company says are faster and feature upgraded processors.
Amazon showed a new feature coming to the Alexa app and the Echo Hub, called Map View, which is essentially a digital floor plan of a user's home. The feature is designed to make it simpler for users to manage their smart home devices. It could also provide a wealth of valuable data for Amazon to understand how people organize their smart homes. Amazon says it's opt-in only, and users select which rooms they want to add to their floor plan. They're able to delete the data at any time.
Sam Altman, CEO of OpenAI, attends the annual Allen and Co. Sun Valley Media and Technology Conference at the Sun Valley Resort in Sun Valley, Idaho, on July 8, 2025.
OpenAI on Monday announced it is taking an ownership stake in Thrive Holdings, a company that was launched by one of its major investors, Thrive Capital, in April.
The startup said it will embed engineering, research and product teams within Thrive Holdings’ companies to help accelerate their AI adoption and boost cost efficiency.
Thrive Holdings buys, owns and runs companies that it believes could benefit from technologies like artificial intelligence. It operates in sectors that are “core to the real economy,” starting with accounting and IT services, according to its website.
OpenAI, which is valued at $500 billion, did not disclose the financial terms of the agreement.
“We are excited to extend our partnership with OpenAI to embed their frontier models, products, and services into sectors we believe have tremendous potential to benefit from technological innovation and adoption,” Joshua Kushner, CEO and founder of Thrive Capital and Thrive Holdings, said in a statement.
It’s the latest example of OpenAI’s circular dealmaking.
The partnership is structured in a way that aligns the incentives of OpenAI and Thrive Holdings long term, according to a person familiar with the deal, who asked not to be named because the details are private.
If Thrive Holdings’ companies succeed, the size of OpenAI’s stake will grow.
It also acts as a way for OpenAI to get compensated for its services, according to another person familiar with the agreement who declined to be named because the details are confidential.
“This partnership with Thrive Holdings is about demonstrating what’s possible when frontier AI research and deployment are rapidly deployed across entire organizations to revolutionize how businesses work and engage with customers,” OpenAI COO Brad Lightcap said in a statement.
OpenAI also announced a collaboration with the consulting firm Accenture on Monday.
The startup said its business offering, ChatGPT Enterprise, will roll out to “tens of thousands” of Accenture employees.
Artificial intelligence startup Runway on Monday announced Gen 4.5, a new video model that outperforms similar models from Google and OpenAI in an independent benchmark.
Gen 4.5 allows users to generate high-definition videos based on written prompts that describe the motion and action they want. Runway said the model is good at understanding physics, human motion, camera movements and cause and effect.
The model holds the No. 1 spot on the Video Arena leaderboard, which is maintained by the independent AI benchmarking and analysis company Artificial Analysis. To determine the text-to-video model rankings, people compare two different model outputs and vote for their favorite without knowing which companies are behind them.
Google’s Veo 3 model holds second place on the leaderboard, and OpenAI’s Sora 2 Pro model is in seventh place.
“We managed to out-compete trillion-dollar companies with a team of 100 people,” Runway CEO Cristóbal Valenzuela told CNBC in an interview. “You can get to frontiers just by being extremely focused and diligent.”
Runway was founded in 2018 and earned a spot on CNBC’s Disruptor 50 list this year. It conducts AI research and builds video and world models, which are models that are trained on video and observational data to better reflect how the physical world works.
The startup’s customers include media organizations, studios, brands, designers, creatives and students. Its valuation has swelled to $3.55 billion, according to PitchBook.
Valenzuela said Gen 4.5 was codenamed “David” in a nod to the biblical story of David and Goliath. The model was “an overnight success that took like seven years,” he said.
“It does feel like a very interesting moment in time where the era of efficiency and research is upon us,” Valenzuela said. “[We’re] excited to be able to make sure that AI is not monopolized by two or three companies.”
Gen 4.5 is rolling out gradually, but it will be available to all of Runway’s customers by the end of the week. Valenzuela said it’s the first of several major releases that the company has in store.
Gen 4.5 will be available through Runway's platform, its application programming interface and some of the company's partners, he said.
Nvidia on Monday announced it has purchased $2 billion of Synopsys' common stock as part of a strategic partnership to accelerate computing and artificial intelligence engineering solutions.
As part of the multiyear partnership, Nvidia will help Synopsys accelerate its portfolio of compute-intensive applications, advance agentic AI engineering, expand cloud access and develop joint go-to-market initiatives, according to a release. Nvidia said it purchased Synopsys’ stock at $414.79 per share.
“Our partnership with Synopsys harnesses the power of Nvidia accelerated computing and AI to reimagine engineering and design — empowering engineers to invent the extraordinary products that will shape our future,” Nvidia CEO Jensen Huang said in the release.
Synopsys stock climbed 3%. Nvidia shares rose slightly.
Nvidia has been one of the biggest beneficiaries of the AI boom because it makes the graphics processing units, or GPUs, that are key to building and training AI models and running large workloads.
Synopsys offers services including silicon design and electronic design automation that help its customers build AI-powered products.
“The complexity and cost of developing next-generation intelligent systems demands engineering solutions with a deeper integration of electronics and physics, accelerated by AI capabilities and compute,” Synopsys CEO Sassine Ghazi said in a statement.
The partnership is not exclusive, which means that Nvidia and Synopsys can still work with other companies in the ecosystem.
Both companies will hold a press conference to discuss the announcement at 10 a.m. ET.