Disinformation is expected to be among the top cyber risks for elections in 2024.

Andrew Brookes | Image Source | Getty Images

Britain is expected to face a barrage of state-backed cyber attacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC. 

Brits will vote on May 2 in local elections, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has not yet committed to a date.

The votes come as the country faces a range of problems including a cost-of-living crisis and stark divisions over immigration and asylum.

“With most U.K. citizens voting at polling stations on the day of the election, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email. 

It wouldn’t be the first time.

In 2016, the U.S. presidential election and U.K. Brexit vote were both found to have been disrupted by disinformation shared on social media platforms, allegedly by Russian state-affiliated groups, although Moscow denies these claims.

State actors have since routinely launched attacks in various countries seeking to manipulate the outcome of elections, according to cyber experts.

Meanwhile, last week, the U.K. alleged that Chinese state-affiliated hacking group APT 31 attempted to access U.K. lawmakers’ email accounts, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a technology firm in Wuhan believed to be a front for APT 31.

The U.S., Australia, and New Zealand followed with their own sanctions. China denied allegations of state-sponsored hacking, calling them “groundless.”

Cybercriminals utilizing AI 

Cybersecurity experts expect malicious actors to interfere in the upcoming elections in several ways — not least through disinformation, which is expected to be even worse this year due to the widespread use of artificial intelligence. 

Synthetic images, videos and audio generated using computer graphics, simulation methods and AI — commonly referred to as “deepfakes” — will become a common occurrence as it gets easier for people to create them, experts say.

“Nation-state actors and cybercriminals are likely to utilize AI-powered identity-based attacks like phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related institutions,” Okta’s McKinnon added.  

“We’re also sure to see an influx of AI and bot-driven content generated by threat actors to push out misinformation at an even greater scale than we’ve seen in previous election cycles.”

The cybersecurity community has called for heightened awareness of this type of AI-generated misinformation, as well as international cooperation to mitigate the risk of such malicious activity. 

Top election risk

Adam Meyers, head of counter adversary operations for cybersecurity firm CrowdStrike, said AI-powered disinformation is a top risk for elections in 2024. 

“Right now, generative AI can be used for harm or for good and so we see both applications every day increasingly adopted,” Meyers told CNBC. 

China, Russia and Iran are highly likely to conduct misinformation and disinformation operations against various global elections with the help of tools like generative AI, according to CrowdStrike’s latest annual threat report.

“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile nation states like Russia or China or Iran can leverage generative AI and some of the newer technology to craft messages and to use deep fakes to create a story or a narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”

A key problem is that AI is reducing the barrier to entry for criminals looking to exploit people online. This has already happened in the form of scam emails that have been crafted using easily accessible AI tools like ChatGPT. 

Hackers are also developing more advanced — and personal — attacks by training AI models on our own data available on social media, according to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.

“You can train those voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and really coming up with something creative.”

In the context of elections, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, abusing party staffers was posted to the social media platform X in October 2023. The post racked up as many as 1.5 million views, according to fact-checking charity Full Fact.

It’s just one of many deepfakes that have cybersecurity experts worried about what’s to come as the U.K. approaches elections later this year.

Elections a test for tech giants

Deepfake technology is becoming a lot more advanced, however. And for many tech companies, the race to beat it is now about fighting fire with fire.

“Deepfakes went from being a theoretical thing to being very much live in production today,” Mike Tuchen, CEO of Onfido, told CNBC in an interview last year. 

“There’s a cat and mouse game now where it’s ‘AI vs. AI’ — using AI to detect deepfakes and mitigating the impact for our customers is the big battle right now.” 

Cyber experts say it’s becoming harder to tell what’s real — but there can be some signs that content is digitally manipulated. 

AI models use prompts to generate text, images and video, but they don’t always get it right. If you’re watching an AI-generated video of a dinner, for example, and a spoon suddenly disappears, that’s a telltale flaw.

“We’ll certainly see more deepfakes throughout the election process but an easy step we can all take is verifying the authenticity of something before we share it,” Okta’s McKinnon added.

Microsoft launches consumption-based 365 Copilot Chat option for corporate users

Microsoft Chairman and CEO Satya Nadella speaks during the Microsoft May 20 Briefing event at Microsoft in Redmond, Washington, on May 20, 2024. Nadella unveiled a new category of PC on Monday that features generative artificial intelligence tools built directly into Windows, the company’s world leading operating system.

Jason Redmond | AFP | Getty Images

Microsoft on Wednesday announced a tier of its Copilot assistant for corporate users with a consumption-based pricing model. The new Microsoft 365 Copilot Chat option represents an alternative to the Microsoft 365 Copilot, which organizations have been able to pay for based on the number of employees with access to it.

The introduction shows Microsoft’s determination to popularize generative artificial intelligence software in the workplace. Several companies have adopted the Microsoft 365 Copilot since it became available for $30 per person per month in November 2023, but one group of analysts recently characterized the product push as “slow/underwhelming.”

Copilot Chat can be an on-ramp to Microsoft 365 Copilot, with a lower barrier to entry, Jared Spataro, Microsoft’s chief marketing officer for AI at work, said in a CNBC interview this week. Both offerings rely on artificial intelligence models from Microsoft-backed OpenAI.

Copilot Chat can fetch information from the web and summarize text in uploaded documents, and people using it can create agents that perform tasks in the background. It can enrich answers with information from customers’ files and third-party sources.

Unlike Microsoft 365 Copilot, Copilot Chat can’t be found in Office applications such as Word and Excel. People can reach Copilot Chat starting today in the Microsoft 365 Copilot app for Windows, Android and iOS. The app was formerly known as Microsoft 365 (Office). It’s also available on the web at m365copilot.com, a spokesperson said.

Some management teams have resisted paying Microsoft to give the 365 Copilot to thousands of employees because they weren’t sure how helpful it would be at the $30 monthly price. Costs will vary for the Copilot Chat depending on what employees do with it, but at least organizations won’t end up paying for nonuse.

“As one customer said to me, this model lets the business value prove itself,” Spataro said.

Microsoft tallies up charges for Copilot Chat based on the number of “messages” a client uses. Each “message” costs a penny, according to a blog post. Responses that draw on the client’s proprietary files cost 30 “messages” each. Every action that an agent takes on behalf of employees costs 25 “messages.”

“We’re talking a cent, 2 cents, 30 cents, and that is a very easy way for people to get started,” Spataro said.
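Those per-“message” rates make the metering easy to estimate. Below is a minimal back-of-the-envelope sketch in Python, assuming a basic answer consumes a single “message” (the 30- and 25-message rates come from the blog post cited above; the function and constant names are illustrative, not Microsoft’s billing API):

    # Hypothetical estimate of Copilot Chat's metered pricing.
    # Rates from the article: 1 "message" = $0.01; a response grounded in a
    # client's proprietary files = 30 messages; an agent action = 25 messages.
    # Assumption: a basic answer consumes one message.

    PRICE_PER_MESSAGE_USD = 0.01
    GROUNDED_RESPONSE_MESSAGES = 30
    AGENT_ACTION_MESSAGES = 25

    def monthly_cost(basic_answers: int, grounded_answers: int, agent_actions: int) -> float:
        """Estimate one user's monthly bill from their usage counts."""
        total_messages = (
            basic_answers
            + grounded_answers * GROUNDED_RESPONSE_MESSAGES
            + agent_actions * AGENT_ACTION_MESSAGES
        )
        return total_messages * PRICE_PER_MESSAGE_USD

    # A fairly heavy user: 300 basic answers, 60 grounded answers, 20 agent actions
    # -> $3.00 + $18.00 + $5.00 = $26.00, still under the $30 flat per-seat license.
    print(f"${monthly_cost(300, 60, 20):.2f}")  # $26.00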

Salesforce charges $2 per conversation for its Agentforce AI chat service, where employees can set up automated sales and customer service processes.

The number of people using Microsoft 365 Copilot every day more than doubled quarter over quarter, CEO Satya Nadella said in October, although he did not disclose how many were using it. But sign-ups have been mounting. UBS said in October that it had 50,000 Microsoft 365 Copilot licenses, and in November, Accenture committed to having 200,000 users of the tool.

These Chinese apps have surged in popularity in the U.S. A TikTok ban could ensnare them

Lemon8, a photo-sharing app by ByteDance, and RedNote, a Shanghai-based content-sharing platform, have seen a surge in popularity in the U.S. as “TikTok refugees” migrate to alternative platforms ahead of a potential ban.

Now a law that could see TikTok shut down in the U.S. threatens to ensnare these Chinese social media apps, and others gaining traction as TikTok alternatives, legal experts say.

As of Wednesday, RedNote — known as Xiaohongshu in China — was the top free app on the U.S. iOS store, with Lemon8 taking the second spot.

The U.S. Supreme Court is set to rule on the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA, which would lead to the TikTok app being banned in the U.S. if its Beijing-based owner, ByteDance, doesn’t divest it by Jan. 19.

While the legislation explicitly names TikTok and ByteDance, experts say its scope is broad and could open the door for Washington to target additional Chinese apps. 

“Chinese social media apps, including Lemon8 and RedNote, could also end up being banned under this law,” Tobin Marcus, head of U.S. policy and politics at New York-based research firm Wolfe Research, told CNBC. 

If the TikTok ban is upheld, the law is unlikely to allow potential replacements originating from China to operate without some form of divestiture, experts told CNBC.

PAFACA automatically applies to Lemon8 as it’s a subsidiary of ByteDance, while RedNote could fall under the law if its monthly average user base in the U.S. continues to grow, said Marcus. 

The legislation prohibits distributing, maintaining, or providing internet hosting services to any “foreign adversary controlled application.” 

These applications include those connected to ByteDance or TikTok or a social media company that is controlled by a “foreign adversary” and has been determined to present a significant threat to national security.

The wording of the legislation is “quite expansive” and would give incoming president Donald Trump room to decide which entities constitute a significant threat to national security, said Carl Tobias, Williams Chair in Law at the University of Richmond. 

Xiaomeng Lu, director of geo-technology at political risk consultancy Eurasia Group, told CNBC that the law will likely prevail, even if its implementation and enforcement are delayed. Regardless, she expects Chinese apps in the U.S. will continue to face increased regulatory action moving forward.

“The TikTok case has set a new precedent for Chinese apps to get targeted and potentially shut down,” Lu said.

She added that other Chinese apps that could come under increased scrutiny this year include the popular e-commerce platforms Temu and Shein. U.S. officials have accused the apps of posing data risks, allegations similar to those levied against TikTok.

The fate of TikTok rests with the Supreme Court after the platform and its parent company filed a suit against the U.S. government, saying that PAFACA violates constitutional protections of free speech.

TikTok’s argument is that the law is unconstitutional as applied to them specifically, not that it is unconstitutional per se, said Cornell Law Professor Gautam Hans. “So, regardless of whether TikTok wins or loses, the law could still potentially be applied to other companies,” he said. 

The law’s defined purview is broad enough that it could be applied to a variety of Chinese apps deemed to be a national security threat, beyond traditional social media apps in the mold of TikTok, Hans said. 

Trump, meanwhile, has urged the U.S. Supreme Court to hold off on implementing PAFACA so he can pursue a “political resolution” after taking office. Democratic lawmakers have also urged Congress and President Joe Biden to extend the Jan. 19 deadline.

Nvidia-backed AI video platform Synthesia doubles valuation to $2.1 billion

Synthesia is a platform that lets users create AI-generated clips with human avatars that can speak in multiple languages.

Synthesia

LONDON — Synthesia, a video platform that uses artificial intelligence to generate clips featuring multilingual human avatars, has raised $180 million in an investment round valuing the startup at $2.1 billion.

That’s more than double the $1 billion Synthesia was worth in its last financing round in 2023.

The London-based startup said Wednesday that the funding round was led by venture firm NEA with participation from Atlassian Ventures, World Innovation Lab and PSP Growth.

NEA counts Uber and TikTok parent company ByteDance among its portfolio companies. Synthesia is also backed by chip giant Nvidia.

Victor Riparbelli, CEO of Synthesia, told CNBC that investors appraised the business differently from other companies in the space due to its focus on “utility.”

“Of course, the hype cycle is beneficial to us,” Riparbelli said in an interview. “For us, what’s important is building an actually good business.”

Synthesia isn’t “dependent” on venture capital — as opposed to companies like OpenAI, Anthropic and Mistral, Riparbelli added.

These startups have raised billions of dollars at eye-watering valuations while burning through sizable amounts of money to train and develop their foundational AI models.

Synthesia isn’t the only startup shaking up the world of video production with AI. Others, such as Veed.io and Runway, offer tools for producing and editing video content.

Meanwhile, the likes of OpenAI and Adobe have also developed generative AI tools for video creation.

Eric Liaw, a London-based partner at VC firm IVP, told CNBC that companies at the application layer of AI haven’t garnered as much investor hype as firms in the infrastructure layer.

“The amount of money that the application layer companies need to raise isn’t as large — and therefore the valuations aren’t necessarily as eye-popping” as those of companies like Nvidia, Liaw told CNBC last month.

Riparbelli said that money raised from the latest financing round would be used to invest in “more of the same,” furthering product development and investing more into security and compliance.

Last year, Synthesia made a series of updates to its platform, including the ability to produce AI avatars using a laptop webcam or phone, full-body avatars with arms and hands, and a screen recording tool in which an AI avatar guides users through what they’re viewing.

On the AI safety front, in October Synthesia conducted a public red team test for risks around online harms, which demonstrated how the firm’s compliance controls counter attempts to create non-consensual deepfakes of people or use its avatars to encourage suicide, adult content or gambling.

The National Institute of Standards and Technology test was led by Rumman Chowdhury, a renowned data scientist who was formerly head of AI ethics at Twitter — before it became known as X under Elon Musk.

Riparbelli said that Synthesia is seeing increased interest from large enterprise customers, particularly in the U.S., thanks to its focus on security and compliance.

More than half of Synthesia’s annual revenue now comes from customers in the U.S., while Europe accounts for almost half.

Synthesia has also been ramping up hiring. The company recently tapped former Amazon executive Peter Hill as its chief technology officer. The company now employs over 400 people globally.

Synthesia’s announcement follows the unveiling of Prime Minister Keir Starmer’s 50-point plan to make the U.K. a global leader in AI.

U.K. Technology Minister Peter Kyle said the investment “showcases the confidence investors have in British tech” and “highlights the global leadership of U.K.-based companies in pioneering generative AI innovations.”
