Disinformation is expected to be among the top cyber risks for elections in 2024.

Andrew Brookes | Image Source | Getty Images

Britain is expected to face a barrage of state-backed cyber attacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence is a key risk, according to cyber experts who spoke to CNBC. 

Brits will vote on May 2 in local elections, and a general election is expected in the second half of this year, although British Prime Minister Rishi Sunak has not yet committed to a date.

The votes come as the country faces a range of problems including a cost-of-living crisis and stark divisions over immigration and asylum.

“With most U.K. citizens voting at polling stations on the day of the election, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email. 

It wouldn’t be the first time.

In 2016, the U.S. presidential election and U.K. Brexit vote were both found to have been disrupted by disinformation shared on social media platforms, allegedly by Russian state-affiliated groups, although Moscow denies these claims.

State actors have since routinely launched attacks in various countries aimed at manipulating the outcome of elections, according to cyber experts. 

Meanwhile, last week, the U.K. alleged that Chinese state-affiliated hacking group APT 31 attempted to access U.K. lawmakers’ email accounts, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a technology firm in Wuhan believed to be a front for APT 31.

The U.S., Australia, and New Zealand followed with their own sanctions. China denied allegations of state-sponsored hacking, calling them “groundless.”

Cybercriminals utilizing AI 

Cybersecurity experts expect malicious actors to interfere in the upcoming elections in several ways — not least through disinformation, which is expected to be even worse this year due to the widespread use of artificial intelligence. 

Experts say synthetic images, videos and audio generated using computer graphics, simulation methods and AI, commonly referred to as “deepfakes,” will become increasingly common as they get easier to create.

“Nation-state actors and cybercriminals are likely to utilize AI-powered identity-based attacks like phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related institutions,” Okta’s McKinnon added.  

“We’re also sure to see an influx of AI and bot-driven content generated by threat actors to push out misinformation at an even greater scale than we’ve seen in previous election cycles.”

The cybersecurity community has called for heightened awareness of this type of AI-generated misinformation, as well as international cooperation to mitigate the risk of such malicious activity. 

Top election risk

Adam Meyers, head of counter adversary operations for cybersecurity firm CrowdStrike, said AI-powered disinformation is a top risk for elections in 2024. 

“Right now, generative AI can be used for harm or for good and so we see both applications every day increasingly adopted,” Meyers told CNBC. 

China, Russia and Iran are highly likely to conduct misinformation and disinformation operations against various global elections with the help of tools like generative AI, according to CrowdStrike’s latest annual threat report.

“This democratic process is extremely fragile,” Meyers told CNBC. “When you start looking at how hostile nation states like Russia or China or Iran can leverage generative AI and some of the newer technology to craft messages and to use deep fakes to create a story or a narrative that is compelling for people to accept, especially when people already have this kind of confirmation bias, it’s extremely dangerous.”

A key problem is that AI is reducing the barrier to entry for criminals looking to exploit people online. This has already happened in the form of scam emails that have been crafted using easily accessible AI tools like ChatGPT. 

Hackers are also developing more advanced — and personal — attacks by training AI models on our own data available on social media, according to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.

“You can train those voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It’s [about] getting that emotional level of engagement and really coming up with something creative.”

In the context of elections, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, abusing party staffers was posted to the social media platform X in October 2023. The post racked up as many as 1.5 million views, according to fact-checking charity Full Fact.

It’s just one of many deepfakes that have cybersecurity experts worried about what’s to come as the U.K. approaches elections later this year.

Elections a test for tech giants

Deepfake technology is becoming a lot more advanced, however, and for many tech companies the race to beat it is now about fighting fire with fire. 

“Deepfakes went from being a theoretical thing to being very much live in production today,” Mike Tuchen, CEO of Onfido, told CNBC in an interview last year. 

“There’s a cat and mouse game now where it’s ‘AI vs. AI’ — using AI to detect deepfakes and mitigating the impact for our customers is the big battle right now.” 

Cyber experts say it’s becoming harder to tell what’s real — but there can be some signs that content is digitally manipulated. 

AI models generate text, images and video from prompts, but they don’t always get it right. For example, if you’re watching an AI-generated video of a dinner and a spoon suddenly disappears between frames, that’s a telltale sign the footage was machine-made. 
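
The “disappearing spoon” described above is really a temporal-consistency test: if an object a detector sees in one frame vanishes in the next frame and then reappears, the footage deserves a second look. The short sketch below is a hypothetical illustration of that idea, not a tool used by any of the experts quoted here; it assumes per-frame object labels have already been produced by some separate detector and simply flags frames where objects blink in and out.

```python
from typing import List, Set

def flag_inconsistent_frames(frame_objects: List[Set[str]]) -> List[int]:
    """Flag frame indices where an object vanishes for a single frame and
    then reappears, the kind of glitch (a spoon blinking out of a dinner
    scene) that can hint a video was AI-generated.

    `frame_objects` holds one set of labels per frame, as reported by a
    separate object detector (assumed input for this sketch).
    """
    suspicious = []
    for i in range(1, len(frame_objects) - 1):
        previous, current, following = frame_objects[i - 1], frame_objects[i], frame_objects[i + 1]
        # Objects present immediately before and after this frame, but missing now.
        blinked_out = (previous & following) - current
        if blinked_out:
            suspicious.append(i)
    return suspicious

if __name__ == "__main__":
    # Toy detections for a five-frame dinner clip: the spoon disappears in frame 2.
    frames = [
        {"plate", "spoon", "glass"},
        {"plate", "spoon", "glass"},
        {"plate", "glass"},           # spoon vanishes
        {"plate", "spoon", "glass"},  # spoon is back
        {"plate", "spoon", "glass"},
    ]
    print("Frames worth a second look:", flag_inconsistent_frames(frames))  # -> [2]
```

Commercial detection systems weigh many signals, such as lighting, lip-sync and compression artifacts, rather than relying on a single heuristic like this.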

“We’ll certainly see more deepfakes throughout the election process but an easy step we can all take is verifying the authenticity of something before we share it,” Okta’s McKinnon added.

Oracle says there have been ‘no delays’ in OpenAI arrangement after stock slide

Oracle CEO Clay Magouyrk appears on a media tour of the Stargate AI data center in Abilene, Texas, on Sept. 23, 2025.

Kyle Grillot | Bloomberg | Getty Images

Oracle on Friday pushed back against a report that said the company will complete data centers for OpenAI, one of its major customers, in 2028, rather than 2027.

The reported delay is due to a shortage of labor and materials, according to the Friday report from Bloomberg, which cited unnamed people. Oracle shares fell to a session low of $185.98, down 6.5% from Thursday’s close.

“Site selection and delivery timelines were established in close coordination with OpenAI following execution of the agreement and were jointly agreed,” an Oracle spokesperson said in an email to CNBC. “There have been no delays to any sites required to meet our contractual commitments, and all milestones remain on track.”

The Oracle spokesperson did not specify a timeline for turning on cloud computing infrastructure for OpenAI. In September, OpenAI said it had a partnership with Oracle worth more than $300 billion over the next five years.

“We have a good relationship with OpenAI,” Clay Magouyrk, one of Oracle’s two newly appointed CEOs, said at an October analyst meeting.

Doing business with OpenAI is relatively new to 48-year-old Oracle. Historically, Oracle grew through sales of its database software and business applications. Its cloud infrastructure business now contributes over one-fourth of revenue, although Oracle remains a smaller hyperscaler than Amazon, Microsoft and Google.

OpenAI has also made commitments to other companies as it looks to meet expected capacity needs.

In September, Nvidia said it had signed a letter of intent with OpenAI to deploy at least 10 gigawatts of Nvidia equipment for the San Francisco artificial intelligence startup. The first phase of that project is expected in the second half of 2026.

Nvidia and OpenAI said in a September statement that they “look forward to finalizing the details of this new phase of strategic partnership in the coming weeks.”

But no announcement has come yet.

In a November filing, Nvidia said “there is no assurance that we will enter into definitive agreements with respect to the OpenAI opportunity.”

OpenAI has historically relied on Nvidia graphics processing units to operate ChatGPT and other products, and now it’s also looking at designing custom chips in a collaboration with Broadcom.

On Thursday, Broadcom CEO Hock Tan laid out a timeline for the OpenAI work, which was announced in October. Broadcom and OpenAI said they had signed a term sheet.

“It’s more like 2027, 2028, 2029, 10 gigawatts, that was the OpenAI discussion,” Tan said on Broadcom’s earnings call. “And that’s, I call it, an agreement, an alignment of where we’re headed with respect to a very respected and valued customer, OpenAI. But we do not expect much in 2026.”

OpenAI declined to comment.

AI order from Trump might be ‘illegal,’ Democrats and consumer advocacy groups claim

“This is the wrong approach — and most likely illegal,” Sen. Amy Klobuchar, D-Minn., said in a post on X Thursday.

“We need a strong federal safety standard, but we should not remove the few protections Americans currently have from the downsides of AI,” Klobuchar said.

Trump’s executive order directs Attorney General Pam Bondi to create a task force to challenge state laws regulating AI.

The Commerce Department was also directed to identify “onerous” state regulations aimed at AI.

The order is a win for tech companies such as OpenAI and Google and the venture firm Andreessen Horowitz, which have all lobbied against state regulations they view as burdensome. 

It follows a push by some Republicans in Congress to impose a moratorium on state AI laws. A recent plan to tack on that moratorium to the National Defense Authorization Act was scuttled.

Collin McCune, head of government affairs at Andreessen Horowitz, celebrated Trump’s order, calling it “an important first step” to boost American competition and innovation. But McCune urged Congress to codify a national AI framework.

“States have an important role in addressing harms and protecting people, but they can’t provide the long-term clarity or national direction that only Congress can deliver,” McCune said in a statement.

Sriram Krishnan, a White House AI advisor and former general partner at Andreessen Horowitz, said during an interview Friday on CNBC’s “Squawk Box” that Trump was looking to partner with Congress to pass such legislation.

“The White House is now taking a firm stance where we want to push back on ‘doomer’ laws that exist in a bunch of states around the country,” Krishnan said.

He also said that the goal of the executive order is to give the White House tools to go after state laws that it believes make America less competitive, such as recently passed legislation in Democratic-led states like California and Colorado.

The White House will not use the executive order to target state laws that protect the safety of children, Krishnan said.

Robert Weissman, co-president of the consumer advocacy group Public Citizen, called Trump’s order “mostly bluster” and said the president “cannot unilaterally preempt state law.”

“We expect the EO to be challenged in court and defeated,” Weissman said in a statement. “In the meantime, states should continue their efforts to protect their residents from the mounting dangers of unregulated AI.”

Weissman said about the order, “This reward to Big Tech is a disgraceful invitation to reckless behavior by the world’s largest corporations and a complete override of the federalist principles that Trump and MAGA claim to venerate.”

In the short term, the order could affect a handful of states that have already passed legislation targeting AI. The order says that states whose laws are considered onerous could lose federal funding.

One Colorado law, set to take effect in June, will require AI developers to protect consumers from reasonably foreseeable risks of algorithmic discrimination.

Some say Trump’s order will have no real impact on that law or other state regulations.

“I’m pretty much ignoring it, because an executive order cannot tell a state what to do,” said Colorado state Rep. Brianna Titone, a Democrat who co-sponsored the anti-discrimination law.

In California, Gov. Gavin Newsom recently signed a law that, starting in January, will require major AI companies to publicly disclose their safety protocols. 

That law’s author, state Sen. Scott Wiener, said that Trump’s stated goal of having the United States dominate the AI sector is undercut by his recent moves. 

“Of course, he just authorized chip sales to China & Saudi Arabia: the exact opposite of ensuring U.S. dominance,” Wiener wrote in an X post on Thursday night. The Bay Area Democrat is seeking to succeed Speaker-emerita Nancy Pelosi in the U.S. House of Representatives.

Trump on Monday said he will allow Nvidia to sell its advanced H200 chips to “approved customers” in China, provided that the U.S. gets a 25% cut of revenues.

Coinbase to soon unveil prediction markets powered by Kalshi, source says

Feature China | Future Publishing | Getty Images

Coinbase is gearing up to launch an in-house prediction market, powered by Kalshi, a source close to the matter told CNBC — a strategic play to expand the number of asset classes available on the cryptocurrency exchange at a time some investors are shying away from digital assets.

The source said Coinbase and Kalshi will “soon” formally announce the prediction market, with news on the matter potentially coming as early as next week.

Rumblings of the prediction market launch have swirled for nearly a month. An alleged screenshot of Coinbase’s prediction markets dashboard shared by Silicon Valley researcher Jane Manchun Wong in an X post dated Nov. 18 offered some clues about the new product.

The Information first reported on Nov. 19 that Coinbase planned to launch prediction markets powered by Kalshi, adding that the exchange would unveil the new product at its “Coinbase System Update” event on Dec. 17. Bloomberg published a similar report on Thursday, citing a source familiar with the matter, adding that Coinbase would also announce a tokenized stock offering at the showcase. 

Coinbase declined to confirm the reports to CNBC, but said to tune into its event next week. The firm did not comment on a timeline for when its prediction markets would go live for its users.

Coinbase’s upcoming product launches underscore its push to refashion itself into an “everything exchange,” or a one-stop shop for trading all kinds of assets, including crypto tokens, tokenized stocks and event contracts. In May, CEO Brian Armstrong articulated that “everything exchange” vision to investors, saying Coinbase would aim to become a top financial services app within the next decade.

The trading platform is setting its sights on that goal as it faces intensifying competition from rivals such as Robinhood, Gemini and Kraken. All three have launched tokenized equity offerings to users outside of the U.S. within the past year, in addition to exploring prediction markets to varying extents.

Coinbase’s moves to expand the financial instruments available to its users also come as investor sentiment on digital assets cools. A series of liquidations of highly leveraged digital asset positions in mid-October triggered several pullbacks in the crypto market, prompting investors to rotate out of tokens and into gold and other safe-haven assets.

Bitcoin fell as low as around $85,000 in early December, hitting its lowest level since last March. The token was last trading at $89,951, down 23% in the past three months. Coinbase has also fallen more than 16% over the past three months.

The deal also underscores U.S.-based prediction markets operator Kalshi’s push to embed its event contracts into various brokerages, widening its reach as the prediction markets space becomes increasingly competitive. 

This year, Kalshi embedded several of its prediction markets into trading platform Robinhood, as part of a non-exclusive partnership between the companies. Kalshi has also engaged in talks with several other major brokerages, including those in the crypto industry, with the aim of closing more deals like the ones it has struck with Robinhood and now Coinbase, a source familiar with the matter told CNBC.
