
2024 is set to be the biggest global election year in history, and it coincides with a rapid rise in deepfakes. In APAC alone, deepfakes surged by 1,530% from 2022 to 2023, according to a Sumsub report.


Ahead of the Indonesian elections on Feb. 14, a video of late Indonesian president Suharto advocating for the political party he once presided over went viral. 

The AI-generated deepfake video that cloned his face and voice racked up 4.7 million views on X alone. 

This was not a one-off incident. 

In Pakistan, a deepfake of former prime minister Imran Khan emerged around the national elections, announcing his party was boycotting them. Meanwhile, in the U.S., New Hampshire voters heard a deepfake of President Joe Biden asking them not to vote in the presidential primary. 

Deepfakes of politicians are becoming increasingly common, especially with 2024 set up to be the biggest global election year in history. 

Reportedly, at least 60 countries and more than four billion people will be voting for their leaders and representatives this year, which makes deepfakes a matter of serious concern.

Rise of election deepfake risks

According to a Sumsub report in November, the number of deepfakes across the world rose tenfold from 2022 to 2023. In APAC alone, deepfakes surged by 1,530% over the same period.

Online media, including social platforms and digital advertising, saw the biggest rise in identity fraud rates, at 274% between 2021 and 2023. Professional services, healthcare, transportation and video gaming were also among the industries impacted by identity fraud.

Asia is not ready to tackle deepfakes in elections in terms of regulation, technology and education, said Simon Chesterman, senior director of AI governance at AI Singapore. 

In its 2024 Global Threat Report, cybersecurity firm CrowdStrike reported that, with the number of elections scheduled this year, nation-state actors from countries including China, Russia and Iran are highly likely to conduct misinformation or disinformation campaigns to sow disruption. 

“The more serious interventions would be if a major power decides they want to disrupt a country’s election — that’s probably going to be more impactful than political parties playing around on the margins,” said Chesterman. 

Although several governments have tools (to prevent online falsehoods), the concern is the genie will be out of the bottle before there’s time to push it back in.

Simon Chesterman

Senior director, AI Singapore

However, most deepfakes will still be generated by actors within the respective countries, he said. 

Carol Soon, principal research fellow and head of the society and culture department at the Institute of Policy Studies in Singapore, said domestic actors may include opposition parties and political opponents, as well as extreme right-wingers and left-wingers.

Deepfake dangers


Adam Meyers, head of counter adversary operations at CrowdStrike, said that deepfakes may also invoke confirmation bias in people: “Even if they know in their heart it’s not true, if it’s the message they want and something they want to believe in they’re not going to let that go.”  

Chesterman also said that fake footage showing misconduct during an election, such as ballot stuffing, could cause people to lose faith in the validity of an election.

On the flip side, candidates may deny the truth about themselves that may be negative or unflattering and attribute that to deepfakes instead, Soon said. 


Who should be responsible?

There is a realization now that more responsibility needs to be taken on by social media platforms because of the quasi-public role they play, said Chesterman. 

In February, 20 leading tech companies, including Microsoft, Meta, Google, Amazon and IBM, as well as artificial intelligence startup OpenAI and social media companies such as Snap, TikTok and X, announced a joint commitment to combat the deceptive use of AI in elections this year. 

The tech accord signed is an important first step, said Soon, but its effectiveness will depend on implementation and enforcement. With tech companies adopting different measures across their platforms, a multi-prong approach is needed, she said. 

Tech companies will also have to be very transparent about the kinds of decisions that are made and the processes that are put in place, Soon added. 

But Chesterman said it is also unreasonable to expect private companies to carry out what are essentially public functions. Deciding what content to allow on social media is a hard call to make, and companies may take months to decide, he said. 


“We should not just be relying on the good intentions of these companies,” Chesterman added. “That’s why regulations need to be established and expectations need to be set for these companies.”

Towards this end, the Coalition for Content Provenance and Authenticity (C2PA), a non-profit, has introduced digital credentials for content, which show viewers verified information such as who created the material, where and when it was created, and whether generative AI was used to create it.

C2PA member companies include Adobe, Microsoft, Google and Intel.

OpenAI has announced it will be implementing C2PA content credentials for images created with its DALL·E 3 offering early this year.
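To illustrate the idea, here is a minimal Python sketch of what consuming such a credential might look like. The manifest layout, field names and the `summarize_credential` helper are simplified illustrations loosely modeled on C2PA’s claim-and-assertion structure; they are not the actual C2PA schema or any particular library’s API.

```python
import json

# IPTC digital-source-type URI that C2PA-style manifests commonly use to
# flag content produced by a generative-AI model (an assumption here; the
# exact vocabulary depends on the issuer).
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def summarize_credential(manifest: dict) -> dict:
    """Pull the viewer-facing provenance facts out of a simplified manifest."""
    summary = {
        "issued_by": manifest.get("claim_generator", "unknown"),
        "created_at": manifest.get("created_at", "unknown"),
        "generative_ai": False,
    }
    # Walk the assertions and look for an AI digital-source-type flag.
    for assertion in manifest.get("assertions", []):
        if assertion.get("digital_source_type") == TRAINED_ALGORITHMIC_MEDIA:
            summary["generative_ai"] = True
    return summary

if __name__ == "__main__":
    # Illustrative manifest, not a real C2PA payload.
    manifest = json.loads("""{
        "claim_generator": "DALL-E 3",
        "created_at": "2024-02-14T09:00:00Z",
        "assertions": [
            {"label": "c2pa.actions",
             "digital_source_type":
             "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"}
        ]
    }""")
    print(summarize_credential(manifest))
    # -> {'issued_by': 'DALL-E 3', 'created_at': '2024-02-14T09:00:00Z',
    #     'generative_ai': True}
```

In practice, C2PA credentials are cryptographically signed and bound to the media file, so a real verifier would validate the signature chain before trusting any of these fields.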

“I think it’d be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ Like, we’re gonna have to watch this relatively closely this year [with] super tight monitoring [and] super tight feedback.”

Sam Altman

CEO, OpenAI

In a Bloomberg House interview at the World Economic Forum in January, OpenAI founder and CEO Sam Altman said the company was “quite focused” on ensuring its technology wasn’t being used to manipulate elections.

“I think our role is very different than the role of a distribution platform” like a social media site or news publisher, he said. “We have to work with them, so it’s like you generate here and you distribute here. And there needs to be a good conversation between them.”

Meyers suggested creating a bipartisan, non-profit technical entity with the sole mission of analyzing and identifying deepfakes.

“The public can then send them content they suspect is manipulated,” he said. “It’s not foolproof but at least there’s some sort of mechanism people can rely on.”

But ultimately, while technology is part of the solution, a large part of it comes down to consumers, who are still not ready, said Chesterman. 

Soon also highlighted the importance of educating the public. 

“We need to continue outreach and engagement efforts to heighten the sense of vigilance and consciousness when the public comes across information,” she said. 

The public needs to be more vigilant: besides fact-checking when something is highly suspicious, users also need to fact-check critical pieces of information, especially before sharing it with others, she said. 

“There’s something for everyone to do,” Soon said. “It’s all hands on deck.”

— CNBC’s MacKenzie Sigalos and Ryan Browne contributed to this report.


These Chinese apps have surged in popularity in the U.S. A TikTok ban could ensnare them

Lemon8, a photo-sharing app by ByteDance, and RedNote, a Shanghai-based content-sharing platform, have seen a surge in popularity in the U.S. as “TikTok refugees” migrate to alternative platforms ahead of a potential ban. 

Now a law that could see TikTok shut down in the U.S. threatens to ensnare these Chinese social media apps, and others gaining traction as TikTok-alternatives, legal experts say. 

As of Wednesday, RedNote — known as Xiaohongshu in China — was the top free app on the U.S. iOS store, with Lemon8 taking the second spot. 

The U.S. Supreme Court is set to rule on the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA, which would lead to the TikTok app being banned in the U.S. if its Beijing-based owner, ByteDance, doesn’t divest it by Jan. 19.

While the legislation explicitly names TikTok and ByteDance, experts say its scope is broad and could open the door for Washington to target additional Chinese apps. 

“Chinese social media apps, including Lemon8 and RedNote, could also end up being banned under this law,” Tobin Marcus, head of U.S. policy and politics at New York-based research firm Wolfe Research, told CNBC. 

If the TikTok ban is upheld, the law is unlikely to allow potential replacements to originate from China without some form of divestiture, experts told CNBC.

PAFACA automatically applies to Lemon8 as it’s a subsidiary of ByteDance, while RedNote could fall under the law if its monthly average user base in the U.S. continues to grow, said Marcus. 

The legislation prohibits distributing, maintaining, or providing internet hosting services to any “foreign adversary controlled application.” 

These applications include those connected to ByteDance or TikTok, as well as any social media company that is controlled by a “foreign adversary” and has been determined to present a significant threat to national security.

The wording of the legislation is “quite expansive” and would give incoming president Donald Trump room to decide which entities constitute a significant threat to national security, said Carl Tobias, Williams Chair in Law at the University of Richmond. 

Xiaomeng Lu, director of geo-technology at political risk consultancy Eurasia Group, told CNBC that the law will likely prevail, even if its implementation and enforcement are delayed. Regardless, she expects Chinese apps in the U.S. to continue to be the subject of increased regulatory action moving forward.

“The TikTok case has set a new precedent for Chinese apps to get targeted and potentially shut down,” Lu said.

She added that other Chinese apps that could face increased scrutiny this year include the popular Chinese e-commerce platforms Temu and Shein. U.S. officials have accused the apps of posing data risks, allegations similar to those levied against TikTok.

The fate of TikTok rests with the Supreme Court after the platform and its parent company filed suit against the U.S. government, arguing that PAFACA violates constitutional protections of free speech.

TikTok’s argument is that the law is unconstitutional as applied to them specifically, not that it is unconstitutional per se, said Cornell Law Professor Gautam Hans. “So, regardless of whether TikTok wins or loses, the law could still potentially be applied to other companies,” he said. 

The law’s defined purview is broad enough that it could be applied to a variety of Chinese apps deemed to be a national security threat, beyond traditional social media apps in the mold of TikTok, Hans said. 

Trump, meanwhile, has urged the U.S. Supreme Court to hold off on implementing PAFACA so he can pursue a “political resolution” after taking office. Democratic lawmakers have also urged Congress and President Joe Biden to extend the Jan. 19 deadline.


Nvidia-backed AI video platform Synthesia doubles valuation to $2.1 billion

Synthesia is a platform that lets users create AI-generated clips with human avatars that can speak in multiple languages.


LONDON — Synthesia, a video platform that uses artificial intelligence to generate clips featuring multilingual human avatars, has raised $180 million in an investment round valuing the startup at $2.1 billion.

That’s more than double the $1 billion Synthesia was worth in its last financing in 2023.

The London-based startup said Wednesday that the funding round was led by venture firm NEA with participation from Atlassian Ventures, World Innovation Lab and PSP Growth.

NEA counts Uber and TikTok parent company ByteDance among its portfolio companies. Synthesia is also backed by chip giant Nvidia.

Victor Riparbelli, CEO of Synthesia, told CNBC that investors appraised the business differently from other companies in the space due to its focus on “utility.”

“Of course, the hype cycle is beneficial to us,” Riparbelli said in an interview. “For us, what’s important is building an actually good business.”

Synthesia isn’t “dependent” on venture capital, unlike companies such as OpenAI, Anthropic and Mistral, Riparbelli added.

These startups have raised billions of dollars at eye-watering valuations while burning through sizable amounts of money to train and develop their foundational AI models.


Synthesia is not the only startup shaking up the world of video production with AI. Other startups, such as Veed.io and Runway, offer solutions for producing and editing video content with AI.

Meanwhile, the likes of OpenAI and Adobe have also developed generative AI tools for video creation.

Eric Liaw, a London-based partner at VC firm IVP, told CNBC that companies at the application layer of AI haven’t garnered as much investor hype as firms in the infrastructure layer.

“The amount of money that the application layer companies need to raise isn’t as large — and therefore the valuations aren’t necessarily as eye-popping” as those of companies like Nvidia, Liaw told CNBC last month.

Riparbelli said that money raised from the latest financing round would be used to invest in “more of the same,” furthering product development and investing more into security and compliance.

Last year, Synthesia made a series of updates to its platform, including the ability to produce AI avatars using a laptop webcam or phone, full-body avatars with arms and hands, and a screen-recording tool in which an AI avatar guides users through what they’re viewing.

On the AI safety front, Synthesia in October conducted a public red-team test for risks around online harms, which demonstrated how the firm’s compliance controls counter attempts to create non-consensual deepfakes of people or to use its avatars to encourage suicide, adult content or gambling.

The National Institute of Standards and Technology test was led by Rumman Chowdhury, a renowned data scientist who was formerly head of AI ethics at Twitter — before it became known as X under Elon Musk.

Riparbelli said that Synthesia is seeing increased interest from large enterprise customers, particularly in the U.S., thanks to its focus on security and compliance.

More than half of Synthesia’s annual revenue now comes from customers in the U.S., while Europe accounts for almost half.

Synthesia has also been ramping up hiring. The company recently tapped former Amazon executive Peter Hill as its chief technology officer. The company now employs over 400 people globally.

Synthesia’s announcement follows the unveiling of Prime Minister Keir Starmer’s 50-point plan to make the U.K. a global leader in AI.

U.K. Technology Minister Peter Kyle said the investment “showcases the confidence investors have in British tech” and “highlights the global leadership of U.K.-based companies in pioneering generative AI innovations.”


SEC sues Elon Musk, alleging failure to properly disclose Twitter ownership


The SEC filed a lawsuit against Elon Musk on Tuesday, alleging the billionaire committed securities fraud in 2022 by failing to disclose his ownership in Twitter and buying shares at “artificially low prices.”

Musk, who is also CEO of Tesla and SpaceX, purchased Twitter for $44 billion, later changing the name of the social network to X. Prior to the acquisition, he had built up a position of more than 5% in the company, which would have required him to disclose his holding to the public.

According to the SEC complaint, filed in U.S. District Court in Washington, D.C., Musk withheld that material information, “allowing him to underpay by at least $150 million for shares he purchased after his financial beneficial ownership report was due.”

The SEC had been investigating whether Musk, or anyone else working with him, committed securities fraud in 2022 as the Tesla CEO sold shares in his car company and shored up his stake in Twitter ahead of his leveraged buyout. Musk said in a post on X last month that the SEC issued a “settlement demand,” pressuring him to agree to a deal including a fine within 48 hours or “face charges on numerous counts” regarding the purchase of shares.

Musk’s lawyer, Alex Spiro, said in an emailed statement that the action is an admission by the SEC that “they cannot bring an actual case.” He added that Musk “has done nothing wrong” and called the suit a “sham” and the result of a “multi-year campaign of harassment,” culminating in a “single-count ticky tak complaint.”

Musk is just a week away from having a potentially influential role in government, as President-elect Donald Trump’s second term begins on Jan. 20. Musk, who was a major financial backer of Trump in the latter stages of the campaign, is poised to lead an advisory group that will focus in part on reducing regulations, including those that affect Musk’s various companies.

In July, Trump vowed to fire SEC chairman Gary Gensler. After Trump’s election victory, Gensler announced that he would be resigning from his post instead.

In a separate civil lawsuit concerning the Twitter deal, the Oklahoma Firefighters Pension and Retirement System sued Musk, accusing him of deliberately concealing his growing investments in the social network and his intent to buy the company. The pension fund’s attorneys argued that Musk, by failing to clearly disclose his investments, had influenced other shareholders’ decisions and put them at a disadvantage.

The SEC said that Musk crossed the 5% ownership threshold in March 2022 and would have been required to disclose his holdings by March 24.

“On April 4, 2022, eleven days after a report was due, Musk finally publicly disclosed his beneficial ownership in a report with the SEC, disclosing that he had acquired over nine percent of Twitter’s outstanding stock,” the complaint says. “That day, Twitter’s stock price increased more than 27% over its previous day’s closing price.”

The SEC alleges that Musk spent over $500 million purchasing more Twitter shares during the time between the required disclosure and the day of his actual filing. That enabled him to buy stock from the “unsuspecting public at artificially low prices,” the complaint says. He “underpaid” Twitter shareholders by over $150 million during that period, according to the SEC.

In the complaint, the SEC is seeking a jury trial and asks that Musk be forced to “pay disgorgement of his unjust enrichment” as well as a civil penalty.

This story is developing.
