Synthesia launched an option to make AI-generated avatars by recording footage of yourself with a webcam or your phone.
Synthesia, a British artificial intelligence startup, on Monday showed off a slew of new product updates, including the ability to create your own Apple-style keynote presentations with AI avatars using just a laptop webcam or your phone.
The seven-year-old firm, which is backed by Nvidia, said the new product updates will make it more of an all-encompassing video production suite for large companies, rather than just a platform that offers users the ability to create AI-generated avatars.
Among the new updates Synthesia is launching are the ability to produce AI avatars using a webcam or phone, “full body” avatars with hands and arms, and a screen recording tool that shows an AI avatar guiding you through what you’re watching.
What is Synthesia?
Synthesia, which says it’s used by nearly half of the Fortune 500, uses AI avatars for all kinds of purposes.
These range from tailored training videos that guide employees through certain processes to promotional material delivered as a video rather than as an email or other written communication.
But that hasn’t always been the case. According to co-founder and CEO Victor Riparbelli, for its first three years Synthesia tried to sell its technology to Hollywood agencies and big-budget video production companies. The firm used computer vision for an AI dubbing tool that made mouth movements look more lifelike in different languages.
“What we figured out was that the quality threshold to do anything with these guys was so big, no matter what we do, we’ll be a very small part of a much bigger process,” Riparbelli told CNBC in an interview at the firm’s London office.
“What was more interesting was the democratization aspect of: There are millions of people in the world who want to make video, but they’re not making video today because they don’t have the budget.”
In an Apple-style keynote, Synthesia’s CEO unveiled the firm’s new products, touting them as a more productivity-focused suite of tools for use by businesses, rather than just a platform that offers AI avatars.
Apple-style keynotes with a webcam
One of the biggest new features the firm showed off was an option to make AI-generated avatars by recording less than five minutes of footage using a webcam or your phone. You can also clone your voice to have the avatars speak in multiple languages.
Typically, to make an AI avatar using Synthesia’s platform, you have to go into a studio in person. Human actors go into a recording booth, record their voice, and perform lines in front of a green screen on an actual filming set.
This is all training data to provide Synthesia’s AI algorithm with the facial and vocal nuances it needs to come up with humanlike avatars that speak in an expressive way. Earlier this year, Synthesia debuted new expressive avatars that can convey human emotions, including happiness, sadness, and frustration.
But now, Synthesia is introducing new software which will make it easier for users to produce a digital version of themselves from anywhere, using just a webcam and Synthesia’s software.
The company is also launching the ability to create full-body avatars. This is different to Synthesia’s current avatars, which are limited to just portrait view. Now, you can go into a studio with dozens of cameras, sensors and lights all around you to make avatars that can move their hands.
Generating hands has traditionally been hard for AI, partly because hands make up only a small part of the human body and are not typically the focus in visual content.
Synthesia also debuted the option to play videos of AI avatars speaking in any language they like, whether it’s English, French, German, or Chinese.
In the future, Synthesia says, it will be able to tailor AI avatars for different countries: for example, a Nigerian avatar running a user through a tutorial rather than an American one.
Synthesia’s AI video assistant can produce summaries of entire articles and documents.
Synthesia also launched a new AI video assistant that can produce summaries of entire articles and documents. A human resources specialist, for example, could use it to make a quick video explaining company benefits packages.
Synthesia’s screen recording tool shows an AI avatar guiding you through what you’re watching.
Another big feature the company is rolling out is a new screen recording tool, which shows an AI avatar guiding you through what you’re watching.
Not chasing a ‘PR moment’
In CNBC’s interview with him, Riparbelli characterized what Synthesia is trying to do as an enterprise-focused product overhaul, which would make it more akin to giants like Microsoft, Salesforce, and Zoom in the enterprise category.
“The world has been blown away by this stuff for the last 12 to 18 to 24 months, which is awesome,” Riparbelli told CNBC.
“But now we have experimented a lot, and we have found out the right use cases for these technologies that have lasting business value. They’re not like just a short-term PR moment.”
“You need to do that business goal of reducing customer support tickets by showing videos instead of text; or sell by making videos instead of just sending out emails,” he added.
“Now people are creating workflows around that. They need better ways to achieve their business goals, not just an interface with AI models. That’s where we’re going as a company.”
The company’s competitors include AI video tools Veed, Colossyan, Elai, and HeyGen. And Chinese-owned social media app TikTok also recently debuted Symphony Assistant, a product that allows creators to make their own AI avatars.
The company makes money through a number of subscription pricing plans, ranging from $22 for a “starter” plan and $67 for a “creator” plan to custom “enterprise” plans, where pricing is negotiated with Synthesia’s sales team.
Video generation startup Luma AI said it raised $900 million in a new funding round led by Humain, an artificial intelligence company owned by Saudi Arabia’s Public Investment Fund.
The financing, which included participation from Advanced Micro Devices’ venture arm and existing investors Andreessen Horowitz, Amplify Partners and Matrix Partners, was announced at the U.S.-Saudi Investment Forum on Wednesday.
The company is now valued at upwards of $4 billion, CNBC has confirmed.
Luma develops multimodal “world models” that are able to learn from not only text, but also video, audio and images in order to simulate reality. CEO Amit Jain told CNBC in an interview that these models expand beyond large language models, which are solely trained on text, to be more effective in “helping in the real, physical world.”
“With this funding, we plan to scale and accelerate our efforts in training and then deploying these world models today,” Jain said.
Luma released Ray3 in September, the first reasoning video model that can interpret prompts to create videos, images and audio. Jain said Ray3 currently benchmarks higher than OpenAI’s Sora 2 and around the same level as Google’s Veo 3.
Humain, which was launched in May, is aiming to deliver full-stack AI capabilities to bolster Saudi Arabia’s position as a global AI hub. The company is led by industry veteran Tareq Amin, who previously ran Aramco Digital and before that was CEO of Rakuten Mobile.
Luma and Humain will also partner to build a 2-gigawatt AI supercluster, dubbed Project Halo, in Saudi Arabia. The buildout will be one of the largest deployments of graphics processing units (GPUs) in the world, Jain said.
Major tech companies have been investing in supercomputers across the globe to train massive AI models. In July, Meta announced plans to build a 1-gigawatt supercluster called Prometheus, and Microsoft deployed the first supercomputing cluster using the Nvidia GB300 NVL72 platform in October.
“Our investment in Luma AI, combined with HUMAIN’s 2GW supercluster, positions us to train, deploy, and scale multimodal intelligence at a frontier level,” Amin said in a release. “This partnership sets a new benchmark for how capital, compute, and capability come together.”
The collaboration also includes Humain Create, an initiative to create sovereign AI models trained on Arabic and regional data. Jain said that, along with building the world’s first Arabic video model, Luma will deploy its models and capabilities to Middle Eastern businesses.
He added that since most models are trained by scraping data from the internet, countries outside the U.S. and Asia are often less represented in AI-generated content.
“It’s really important that we bring these cultures, their identities, their representation — visual and behavioral and everything — to our model,” Jain said.
AI-generated content tools have received significant backlash over the past year from entertainment studios over copyright concerns. Luma’s flagship text-to-video platform Dream Machine garnered some accusations of copying IP earlier this year, but Jain said the company has installed safeguards to prevent unwanted usage.
“Even if you really try to trick it, we are constantly improving it,” he said. “We have built very robust systems that are actually using models we trained to detect them.”
Perplexity on Wednesday announced it will roll out a free agentic shopping product for U.S. users next week, as consumers ramp up spending for the holiday season.
“The agentic part is the seamless purchase right from the answer,” Dmitry Shevelenko, Perplexity’s chief business officer, told CNBC in an interview. “Most people want to still do their own research. They want that streamlined and simplified, and so that’s the part that is agentic in this launch.”
The artificial intelligence startup has partnered with PayPal ahead of the launch, and users will eventually be able to directly purchase items from more than 5,000 merchants through Perplexity’s search engine.
Perplexity initially released a shopping offering called “Buy With Pro” for its paid subscribers late last year. The company said its new free product will be better at detecting shopping intent and will deliver more personalized results by drawing on memory from a user’s previous searches.
Perplexity declined to share whether it will earn revenue from transactions that are completed through its platform.
The startup’s competitor OpenAI announced a similar e-commerce feature called Instant Checkout in September, which allows ChatGPT users to buy items from merchants without leaving the chatbot’s interface. OpenAI has said it will take a fee from those purchases.
Etsy and Shopify were named as OpenAI’s initial partners for Instant Checkout, but it also inked a deal with PayPal late last month.
Starting next year, PayPal users will be able to buy items, and PayPal merchants will be able to sell items through ChatGPT.
Michelle Gill, who leads PayPal’s agentic strategy, said the company has been building out infrastructure and protections as AI ushers in the “next era of commerce.”
Part of that means keeping consumers and merchants connected to PayPal as they engage on new platforms like Perplexity, she said.
Perplexity said PayPal merchants will serve as the merchants of record through its agentic shopping product, which will allow them to handle processes like purchases, customer service and returns directly.
Through its “Buy With Pro” offering, Perplexity had served as the intermediary that completed purchases.
Gill said PayPal’s buyer protection policies, which can help users get reimbursed if there are problems with their orders, will also apply to transactions on Perplexity.
“We’re really excited about this launch because we will see it come to life during a period that’s so organic for people to shop,” Gill said in an interview.
Nvidia founder and CEO Jensen Huang reacts during a press conference at the Asia-Pacific Economic Cooperation (APEC) CEO Summit in Gyeongju on October 31, 2025.
Artificial intelligence chipmaker Nvidia is scheduled to report fiscal third-quarter earnings on Wednesday after the market closes.
Here’s what Wall Street is expecting, per LSEG consensus estimates:
EPS: $1.25
Revenue: $54.92 billion
Wall Street expects the chipmaker to guide for $1.43 in earnings per share on $61.66 billion of revenue in the current quarter. Nvidia typically provides one quarter of revenue guidance.
Anything Nvidia or CEO Jensen Huang says about the company’s outlook and its sales backlog will be closely scrutinized.
Nvidia is at the center of the AI boom, and it counts every major cloud company and AI lab as a customer. All of the major AI labs use Nvidia chips to develop next-generation models, and a handful of companies called hyperscalers have committed hundreds of billions of dollars to construct new data centers around Nvidia technology in unprecedented build-outs.
Last month, Huang said Nvidia had $500 billion in chip orders in calendar 2025 and 2026, including the forthcoming Rubin chip, which will start shipping in volume next year. Analysts will want to know more about what Nvidia sees coming from the AI infrastructure world next year, because all five of the top AI model developers in the U.S. use the company’s chips.
As of Tuesday, analysts polled by LSEG expect Nvidia’s sales to rise 39% in the company’s fiscal 2027, which starts in early 2026.
Investors will want to hear about Nvidia’s equity deals with customers and suppliers, including an agreement to invest in OpenAI, a deal with Nokia and an investment into former rival Intel. Nvidia has kept its pace of deal-making up, agreeing to invest $10 billion into AI company Anthropic earlier this week.
Nvidia management will also be asked about China, and the possibility that the company could gain licenses from the U.S. government to export a version of its current-generation Blackwell AI chip to the country. Analysts say Nvidia’s sales could get a boost of as much as $50 billion per year if it is allowed to sell current-generation chips to Chinese companies.