Synthesia launched an option to make AI-generated avatars by recording footage of yourself with a webcam or your phone.

Synthesia, a British artificial intelligence startup, on Monday showed off a slew of new product updates, including the ability to create your own Apple-style keynote presentations with AI avatars using just a laptop webcam or your phone.

The seven-year-old firm, which is backed by Nvidia, said the new product updates will make it more of an all-encompassing video production suite for large companies, rather than just a platform that offers users the ability to create AI-generated avatars.

Among the new updates Synthesia is launching are the ability to produce AI avatars using a webcam or phone, “full body” avatars with hands and arms, and a screen recording tool in which an AI avatar guides you through what you’re watching.

What is Synthesia?

Synthesia, which says it’s used by nearly half of the Fortune 500, uses AI avatars for all kinds of purposes.

These range from tailored training videos that guide employees through certain processes to promotional material delivered as a video rather than an email or other written communication.


But that hasn’t always been the case. According to co-founder and CEO Victor Riparbelli, Synthesia spent the first three years of its history trying to sell its technology to Hollywood agencies and big-budget video production companies. The firm used computer vision for an AI dubbing tool that made mouth movements look more lifelike in different languages.

“What we figured out was that the quality threshold to do anything with these guys was so big, no matter what we do, we’ll be a very small part of a much bigger process,” Riparbelli told CNBC in an interview at the firm’s London office.

“What was more interesting was the democratization aspect of: There are millions of people in the world who want to make video, but they’re not making video today because they don’t have the budget.”

In an Apple-style keynote, Synthesia’s CEO unveiled the firm’s new products, touting them as a more productivity-focused suite of tools for use by businesses, rather than just a platform that offers AI avatars.

Apple-style keynotes with a webcam

Now, Synthesia is introducing new software that makes it easier for users to produce a digital version of themselves from anywhere, using just a webcam.

The company is also launching full-body avatars. These differ from Synthesia’s current avatars, which are limited to portrait view. To create one, you go into a studio surrounded by dozens of cameras, sensors and lights, producing an avatar that can move its hands and arms.

Generating hands is something AI has traditionally struggled with, partly because hands make up only a small part of the human body and are not typically the focus of visual content.

Synthesia also debuted the option to have AI avatars speak in whichever language users like, whether it’s English, French, German or Chinese.

In the future, Synthesia says, it will be able to tailor AI avatars to different countries: for example, a Nigerian avatar running a user through a tutorial rather than an American one.

Synthesia also launched a new AI video assistant that can turn entire articles and documents into video summaries. A human resources specialist, for example, could use it to make a quick video explaining the company’s benefits packages.

Another big feature the company is rolling out is a new screen recording tool, which shows an AI avatar guiding you through what you’re watching.

Not chasing a ‘PR moment’

In the interview, Riparbelli characterized what Synthesia is trying to do as an enterprise-focused product overhaul, one that would make the company more akin to giants like Microsoft, Salesforce and Zoom in the enterprise category.

“The world has been blown away by this stuff for the last 12 to 18 to 24 months, which is awesome,” Riparbelli told CNBC.

“But now we have experimented a lot, and we have found out the right use cases for these technologies that have lasting business value. They’re not like just a short-term PR moment.”

“You need to do that business goal of reducing customer support tickets by showing videos instead of text; or sell by making videos instead of just sending out emails,” he added.

“Now people are creating workflows around that. They need better ways to achieve their business goals, not just an interface with AI models. That’s where we’re going as a company.”

Last year, Synthesia raised $90 million from investors including U.S. chipmaker Nvidia and venture capital firm Accel, in a funding round that valued it at $1 billion, giving it “unicorn” status.

The company’s competitors include AI video tools Veed, Colossyan, Elai and HeyGen. Chinese-owned social media app TikTok also recently debuted Symphony Assistant, a product that allows creators to make their own AI avatars.

The company makes money through a number of subscription plans, ranging from $22 for a “starter” plan and $67 for a “creator” plan to custom “enterprise” plans whose pricing is negotiated with Synthesia’s sales team.

Too early to bet against AI trade, State Street suggests 


State Street is reiterating its bullish stance on the artificial intelligence trade despite the Nasdaq’s worst week since April.

Chief Business Officer Anna Paglia said momentum stocks still have legs because investors are reluctant to step away from the growth story that’s driven gains all year.

“How would you not want to participate in the growth of AI technology? Everybody has been waiting for the cycle to change from growth to value. I don’t think it’s happening just yet because of the momentum,” Paglia told CNBC’s “ETF Edge” earlier this week. “I don’t think the rebalancing trade is going to happen until we see a signal from the market indicating a slowdown in these big trends.”

Paglia, who has spent 25 years in the exchange-traded funds industry, sees a higher likelihood that the space will cool off early next year.

“There will be much more focus about the diversification,” she said.

Her firm manages several ETFs with exposure to the technology sector, including the SPDR NYSE Technology ETF, which has gained 38% so far this year as of Friday’s close.

The fund, however, pulled back more than 4% over the past week as investors took profits in AI-linked names. Its second-largest holding as of Friday’s close was Palantir Technologies, according to State Street’s website. Palantir’s stock tumbled more than 11% this week after the company’s earnings report on Monday.

Despite the decline, Paglia reaffirmed her bullish tech view in a statement to CNBC later in the week.

Meanwhile, Todd Rosenbluth suggested a rotation is already starting to grip the market, pointing to a renewed appetite for health-care stocks.

“The Health Care Select Sector SPDR Fund… which has been out of favor for much of the year, started a return to favor in October,” the firm’s head of research said in the same interview. “Health care tends to be a more defensive sector, so we’re watching to see if people continue to gravitate towards that as a way of diversifying away from some of those sectors like technology.”

The Health Care Select Sector SPDR Fund, which has underperformed the technology sector this year, is up 5% since Oct. 1. Health care was also the second-best performing S&P 500 sector this week.


People with ADHD, autism, dyslexia say AI agents are helping them succeed at work

Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI.

A recent study from the UK’s Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical respondents.

“Standing up and walking around during a meeting means that I’m not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes,” said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement).

“I’ve white-knuckled my way through the business world,” DeZao said. “But these tools help so much.”

AI tools in the workplace run the gamut and can have hyper-specific use cases, but solutions like note takers, schedule assistants and in-house communication support are common. Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who’ve previously had to find ways to fit in among a work culture not built with them in mind.

Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue.

AI ethics and neurodiverse workers

“Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do,” said Kristi Boyd, an AI specialist with the SAS data ethics practice. “It’s a smart way to make good on your organization’s AI investments.”

Boyd referred to an SAS study which found that companies investing the most in AI governance and guardrails were 1.6 times more likely to see at least double ROI on their AI investments. But Boyd highlighted three risks that companies should be aware of when implementing AI tools with neurodiverse and other individuals in mind: competing needs, unconscious bias and inappropriate disclosure.

“Different neurodiverse conditions may have conflicting needs,” Boyd said. For example, while people with dyslexia may benefit from document readers, people with bipolar disorder or other mental health neurodivergences may benefit from AI-supported scheduling to make the most of productive periods. “By acknowledging these tensions upfront, organizations can create layered accommodations or offer choice-based frameworks that balance competing needs while promoting equity and inclusion,” she explained.

Regarding AI’s unconscious biases, algorithms can be (and have been) unintentionally taught to associate neurodivergence with danger, disease or negativity, as outlined in Duke University research. And even today, neurodiversity can still be met with workplace discrimination, making it important for companies to provide safe ways to use these tools without forcing any individual worker to disclose a diagnosis.

‘Like somebody turned on the light’

As businesses take accountability for the impact of AI tools in the workplace, Boyd says it’s important to remember to include diverse voices at all stages, implement regular audits and establish safe ways for employees to anonymously report issues.

The work to make AI deployment more equitable, including for neurodivergent people, is just getting started. The nonprofit Humane Intelligence, which focuses on deploying AI for social good, released in early October its Bias Bounty Challenge, where participants can identify biases with the goal of building “more inclusive communication platforms — especially for users with cognitive differences, sensory sensitivities or alternative communication styles.”

For example, emotion AI (when AI identifies human emotions) can help people with difficulty identifying emotions make sense of their meeting partners on video conferencing platforms like Zoom. Still, this technology requires careful attention to bias by ensuring AI agents recognize diverse communication patterns fairly and accurately, rather than embedding harmful assumptions.

DeZao said her ADHD diagnosis felt like “somebody turned on the light in a very, very dark room.”

“One of the most difficult pieces of our hyper-connected, fast world is that we’re all expected to multitask. With my form of ADHD, it’s almost impossible to multitask,” she said.

DeZao says one of AI’s most helpful features is its ability to receive instructions and do its work while the human employee can remain focused on the task at hand. “If I’m working on something and then a new request comes in over Slack or Teams, it just completely knocks me off my thought process,” she said. “Being able to take that request and then outsource it real quick and have it worked on while I continue to work [on my original task] has been a godsend.”
