Disinformation is expected to be among the top cyber risks for elections in 2024.

Andrew Brookes | Image Source | Getty Images

Britain is expected to face a barrage of state-backed cyber attacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC. 

Brits will vote on May 2 in local elections, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has not yet committed to a date.

The votes come as the country faces a range of problems including a cost-of-living crisis and stark divisions over immigration and asylum.

“With most U.K. citizens voting at polling stations on the day of the election, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email. 

It wouldn’t be the first time.

In 2016, the U.S. presidential election and U.K. Brexit vote were both found to have been disrupted by disinformation shared on social media platforms, allegedly by Russian state-affiliated groups, although Moscow denies these claims.

State actors have since routinely mounted attacks in various countries seeking to manipulate election outcomes, according to cyber experts.

Meanwhile, last week, the U.K. alleged that Chinese state-affiliated hacking group APT 31 attempted to access U.K. lawmakers’ email accounts, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a technology firm in Wuhan believed to be a front for APT 31.

The U.S., Australia, and New Zealand followed with their own sanctions. China denied allegations of state-sponsored hacking, calling them “groundless.”

Cybercriminals utilizing AI 

Cybersecurity experts expect malicious actors to interfere in the upcoming elections in several ways — not least through disinformation, which is expected to be even worse this year due to the widespread use of artificial intelligence. 

Synthetic images, videos and audio generated using computer graphics, simulation methods and AI — commonly referred to as “deepfakes” — will become commonplace as they get easier to create, experts say.


“Nation-state actors and cybercriminals are likely to utilize AI-powered identity-based attacks like phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related institutions,” Okta’s McKinnon added.  

“We’re also sure to see an influx of AI and bot-driven content generated by threat actors to push out misinformation at an even greater scale than we’ve seen in previous election cycles.”

The cybersecurity community has called for heightened awareness of this type of AI-generated misinformation, as well as international cooperation to mitigate the risk of such malicious activity. 

Top election risk

Adam Meyers, head of counter adversary operations for cybersecurity firm CrowdStrike, said AI-powered disinformation is a top risk for elections in 2024. 

“Right now, generative AI can be used for harm or for good and so we see both applications every day increasingly adopted,” Meyers told CNBC. 

China, Russia and Iran are highly likely to conduct misinformation and disinformation operations against various global elections with the help of tools like generative AI, according to CrowdStrike’s latest annual threat report.

“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile nation states like Russia or China or Iran can leverage generative AI and some of the newer technology to craft messages and to use deep fakes to create a story or a narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”

A key problem is that AI is reducing the barrier to entry for criminals looking to exploit people online. This has already happened in the form of scam emails that have been crafted using easily accessible AI tools like ChatGPT. 

Hackers are also developing more advanced — and personal — attacks by training AI models on our own data available on social media, according to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.

“You can train those voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and really coming up with something creative.”

In the context of elections, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, abusing party staffers was posted to the social media platform X in October 2023. The post racked up as many as 1.5 million views, according to fact-checking charity Full Fact.

It’s just one of many deepfakes that have cybersecurity experts worried about what’s to come as the U.K. approaches elections later this year.

Elections a test for tech giants


Deepfake technology is becoming much more advanced, however, and for many tech companies the race to beat deepfakes is now about fighting fire with fire.

“Deepfakes went from being a theoretical thing to being very much live in production today,” Mike Tuchen, CEO of Onfido, told CNBC in an interview last year. 

“There’s a cat and mouse game now where it’s ‘AI vs. AI’ — using AI to detect deepfakes and mitigating the impact for our customers is the big battle right now.” 

Cyber experts say it’s becoming harder to tell what’s real — but there can be some signs that content is digitally manipulated. 

Generative AI models turn prompts into text, images and video, but they don’t always get it right. If you’re watching an AI-generated video of a dinner, for example, and a spoon suddenly disappears, that’s a telltale flaw.

“We’ll certainly see more deepfakes throughout the election process but an easy step we can all take is verifying the authenticity of something before we share it,” Okta’s McKinnon added.

Technology

Nvidia’s new software could help trace where its AI chips end up

Cfoto | Future Publishing | Getty Images

Nvidia is developing software that could provide location verification for its AI graphics processing units (GPUs), a move that comes as Washington ramps up efforts to prevent restricted chips from being used in countries like China.

The opt-in service uses a client software agent that Nvidia chip customers can install to monitor the health of their AI GPUs, the company said in a blog post on Wednesday.

Nvidia also said that customers “will be able to visualize their GPU fleet utilization in a dashboard, globally or by compute zones — groups of nodes enrolled in the same physical or cloud locations.”

However, Nvidia told CNBC in a statement that the latest software does not give the company or outside actors the ability to disable its chips.

“There is no kill switch,” it added. “For GPU health, there are no features that allow NVIDIA to remotely control or take action on registered systems. It is read-only telemetry sent to NVIDIA.”

Telemetry is the automated process of collecting and transmitting data from remote or inaccessible sources to a central location for monitoring, analysis and optimization.

The ability to locate a device depends on the type of sensor data collected and transmitted, such as IP-based network information, timestamps, or other system-level signals that can be mapped to physical or cloud locations.
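As an illustration of that idea, here is a minimal Python sketch of how a read-only telemetry record could pair system-level signals with an inferred compute zone. The field names and the IP-prefix-to-zone table are hypothetical assumptions for illustration, not Nvidia’s actual schema.

```python
import json
from datetime import datetime, timezone

# Illustrative table mapping IP prefixes to compute zones (an assumption,
# not a real Nvidia data structure).
ZONE_BY_PREFIX = {
    "203.0.113.": "us-west-datacenter",
    "198.51.100.": "eu-central-datacenter",
}

def infer_zone(ip_address: str) -> str:
    """Map an IP address to a known compute zone, or 'unknown'."""
    for prefix, zone in ZONE_BY_PREFIX.items():
        if ip_address.startswith(prefix):
            return zone
    return "unknown"

def build_telemetry_record(gpu_id: str, ip_address: str) -> dict:
    """Assemble a read-only health/location record for one GPU node."""
    return {
        "gpu_id": gpu_id,
        "ip_address": ip_address,
        "inferred_zone": infer_zone(ip_address),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = build_telemetry_record("gpu-0001", "203.0.113.42")
print(json.dumps(record, indent=2))
```

The point of the sketch is that no dedicated hardware tracker is needed: ordinary network metadata plus a timestamp is often enough to place a machine in a physical or cloud region.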

A screenshot of the software posted on Nvidia’s blog showed details such as the machine’s IP address and location.

Nvidia blog screenshot | Opt-In NVIDIA Software Enables Data Center Fleet Management

Lukasz Olejnik, a senior research fellow at the Department of War Studies, King’s College London, said that while Nvidia indicated that its GPUs do not have hardware tracking technology, the blog did not specify if the data “uses customer input, network data, cloud provider metadata, or other methods.”

“In principle, also, the sent data contains metadata like network address, which may enable location in practice,” Olejnik, who is also an independent consultant, told CNBC.

The software could also detect any unexpected usage patterns that differ from what was declared, he added.

The latest features from Nvidia follow calls by lawmakers in Washington for the company to outfit its chips with tracking software that could help enforce export controls. 

Those rules bar Nvidia from selling its more advanced AI chips to companies in China and other prohibited locations without a special license. While U.S. President Donald Trump has recently said he plans to roll back some of these export restrictions, those on Nvidia’s cutting-edge chips will remain in place.

In May, Senator Tom Cotton and a bipartisan group of eight lawmakers introduced the Chip Security Act, which, if passed, would mandate security mechanisms and location verification in advanced AI chips. 

“Firms affected by U.S. export controls or China-related restrictions could use the system to verify and prove their GPU fleets remain in approved locations and state, and demonstrate compliant usage to regulators,” Olejnik noted.

“That could actually help in compliance and indirectly on investment outlook positively.”

Pressure on Nvidia has intensified after Justice Department investigations into alleged smuggling rings that moved over $160 million in Nvidia chips to China.

However, Chinese officials have pushed back, warning Nvidia against equipping its chips with tracking features, as well as “potential backdoors and vulnerabilities.” 

Following a national security investigation into some of Nvidia’s chips to check for these backdoors, Chinese officials have prevented local tech companies from purchasing products from the American chip designer. 

Despite a green light from U.S. President Donald Trump for Nvidia to ship its previously restricted H200 chips to China, Beijing is reportedly undecided about whether to permit the imports.

Technology

Oracle shares plummet 11% in premarket, dragging down AI stocks

Oracle shares plummeted 11% in premarket trading on Thursday, extending the previous session’s losses after the firm reported disappointing results.

The cloud computing and database software maker reported lower-than-expected quarterly revenue on Wednesday, despite booming demand for its artificial intelligence infrastructure. Its revenue came in at $16.06 billion, compared with $16.21 billion expected by analysts, according to data compiled by LSEG.

It dragged other AI-related names down with it. Chip darling Nvidia was last seen down 1.5% in premarket trading, memory and storage firm Micron was 1.4% lower, tech heavyweight Microsoft dipped 0.9%, cloud company CoreWeave slid 3% and AMD was 1.3% in negative territory.


Oracle has been the subject of much market chatter since raising $18 billion in a jumbo bond sale in September, marking one of the largest debt issuances for the tech industry on record. The name shot onto investor agendas when it inked a $300 billion deal with OpenAI in the same month. Oracle made further moves into cloud infrastructure, where it battles Big Tech names such as Amazon, Microsoft and Google for AI contracts.

Global investors have questioned Oracle’s aggressive AI infrastructure build-out plans and whether it needs such a colossal amount of debt to execute, though other tech firms have also recently issued corporate bonds.

Oracle specifically has secured billions of dollars of construction loans through a consortium of banks tied to data centers in New Mexico and Wisconsin. The firm will raise roughly $20 billion to $30 billion in debt every year for the next three years, according to estimates by Citi analyst Tyler Radke.

Its share price has moved 34% higher year-to-date despite recent losses.

Technology

Google’s AI unit DeepMind announces its first ‘automated research lab’ in the UK

Google DeepMind, the tech giant’s AI unit, unveiled plans for its first “automated research lab” in the U.K. as it signs a partnership that could lead to the company deploying its latest models in the country. 

The AI company will open the lab, which will use AI and robotics to run experiments, in the U.K. next year. It will focus on developing new superconductor materials, which can be used to develop medical imaging tech, alongside new materials for semiconductors.

British scientists will gain “priority access” to some of the world’s most advanced AI tools under the partnership, the U.K. government said in its announcement.

Founded in London in 2010 by Nobel Prize winner Demis Hassabis, DeepMind was acquired by Google in 2014 but has retained a large operational base in the U.K. The company has made several breakthroughs considered crucial to advancing AI technology.

The partnership could also lead to DeepMind working with the government on AI research in areas like nuclear fusion and deploying its Gemini models across government and education in the U.K., the government said.

“DeepMind serves as the perfect example of what UK-US tech collaboration can deliver – a firm with roots on both sides of the Atlantic backing British innovators to shape the curve of technological progress,” said U.K. Technology Secretary Liz Kendall in a statement.

“This agreement could help to unlock cleaner energy, smarter public services, and new opportunities which will benefit communities up and down the country,” she said.


“AI has incredible potential to drive a new era of scientific discovery and improve everyday life,” said Hassabis.

“We’re excited to deepen our collaboration with the UK government and build on the country’s rich heritage of innovation to advance science, strengthen security, and deliver tangible improvements for citizens.”

The U.K. has been racing to sign deals with major tech companies since the publication of its national AI strategy in January, as it tries to build out AI infrastructure and expand public deployment of the technology.

Microsoft, Nvidia, Google and OpenAI announced plans to funnel over $40 billion of investment into new AI infrastructure in the country in September, during a state visit by U.S. President Donald Trump.
