2024 is set to be the biggest global election year in history, and it coincides with the rapid rise of deepfakes. In the Asia-Pacific region alone, deepfakes surged 1,530% from 2022 to 2023, according to a Sumsub report.
Cybersecurity experts fear artificial intelligence-generated content has the potential to distort our perception of reality — a concern that is more troubling in a year filled with critical elections.
But one top expert is going against the grain, suggesting instead that the threat deepfakes pose to democracy may be “overblown.”
Martin Lee, technical lead for Cisco’s Talos security intelligence and research group, told CNBC he thinks that deepfakes — though a powerful technology in their own right — aren’t as impactful as fake news is.
However, new generative AI tools do “threaten to make the generation of fake content easier,” he added.
AI-generated material often contains identifiable indicators suggesting it was not produced by a real person.
Visual content, in particular, has proven vulnerable to flaws. For example, AI-generated images can contain visual anomalies, such as a person with more than two hands, or a limb that’s merged into the background of the image.
It can be tougher to distinguish synthetically generated voice audio from voice clips of real people. But AI is still only as good as its training data, experts say.
“Nevertheless, machine generated content can often be detected as such when viewed objectively. In any case, it is unlikely that the generation of content is limiting attackers,” Lee said.
Matt Calkins, CEO of enterprise tech firm Appian, which helps businesses make apps more easily with software tools, said AI has a “limited usefulness.”
A lot of today’s generative AI tools can be “boring,” he added. “Once it knows you, it can go from amazing to useful [but] it just can’t get across that line right now.”
“Once we’re willing to trust AI with knowledge of ourselves, it’s going to be truly incredible,” Calkins told CNBC in an interview this week.
That could make it a more effective — and dangerous — disinformation tool in future, Calkins warned, adding he’s unhappy with the progress being made on efforts to regulate the technology stateside.
It might take AI producing something egregiously “offensive” for U.S. lawmakers to act, he added. “Give us a year. Wait until AI offends us. And then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”
No matter how advanced AI gets, though, Cisco’s Lee says there are some tried and tested ways to spot misinformation — whether it’s been made by a machine or a human.
“People need to know that these attacks are happening and be mindful of the techniques that may be used. When encountering content that triggers our emotions, we should stop, pause, and ask ourselves if the information itself is even plausible,” Lee suggested.
“Has it been published by a reputable source of media? Are other reputable media sources reporting the same thing?” he said. “If not, it’s probably a scam or disinformation campaign that should be ignored or reported.”
As artificial intelligence becomes more democratized, it is important for emerging economies to build their own “sovereign AI,” panelists told CNBC’s East Tech West conference in Bangkok, Thailand, on Friday.
In general, sovereign AI refers to a nation’s ability to control its own AI technologies, data and related infrastructure, ensuring strategic autonomy while meeting its unique priorities and security needs.
However, this sovereignty has been lacking, according to panelist Kasima Tharnpipitchai, head of AI strategy at SCB 10X, the technology investment arm of Thailand-based SCBX Group. He noted that many of the world’s most prominent large language models, operated by companies such as Anthropic and OpenAI, are based on the English language.
“The way you think, the way you interact with the world, the way you are when you speak another language can be very different,” Tharnpipitchai said.
It is therefore important for countries to take ownership of their AI systems, developing technology for specific languages, cultures and countries rather than simply translating English-based models.
Panelists agreed that the digitally savvy ASEAN region, with a total population of nearly 700 million people, is particularly well positioned to build its sovereign AI. People under the age of 35 make up around 61% of the population, and about 125,000 new users gain access to the internet daily.
Given this context, Jeff Johnson, managing director of ASEAN at Amazon Web Services, said, “I think it’s really important, and we’re really focused on how we can really democratize access to cloud and AI.”
Open-source models
According to panelists, one key way that countries can build up their sovereign AI environments is through the use of open-source AI models.
“There is plenty of amazing talent here in Southeast Asia and in Thailand, especially. To have that captured in a way that isn’t publicly accessible or ecosystem developing would feel like a shame,” said SCB 10X’s Tharnpipitchai.
Doing open-source is a way to create a “collective energy” to help Thailand better compete in AI and push sovereignty in a way that is beneficial for the entire country, he added.
Open-source generally refers to software in which the source code is made freely available, allowing anyone to view, modify and redistribute it. LLM players, such as China’s DeepSeek and Meta’s Llama, advertise their models as open-source, albeit with some restrictions.
The emergence of more open-source models offers companies and governments more options compared to relying on a few closed models, according to Cecily Ng, vice president and general manager of ASEAN & Greater China at software vendor Databricks.
AI experts have previously told CNBC that open-source AI has helped China boost AI adoption, better develop its AI ecosystem and compete with the U.S.
Access to computing
Prem Pavan, vice president and general manager of Southeast Asia and Korea at Red Hat, said that the localization of AI had been focused on language until recently. Having sovereign access to AI models powered by local hardware and computing is more important today, he added.
Panelists said that for emerging countries like Thailand, AI localization can be offered by cloud computing companies with domestic operations. These include global hyperscalers such as AWS, Microsoft Azure and Tencent Cloud, and sovereign players like AIS Cloud and True IDC.
“We’re here in Thailand and across Southeast Asia to support all industries, all businesses of all shapes and sizes, from the smallest startup to the largest enterprise,” said AWS’s Johnson.
He added that the economic model of the company’s cloud services makes it easy to “pay for what you use,” thus lowering the barriers to entry and making it very easy to build models and applications.
In April, the U.N. Trade and Development Agency said in a report that AI was projected to reach $4.8 trillion in market value by 2033. However, it warned that the technology’s benefits remain highly concentrated, with nations at risk of lagging behind.
Among UNCTAD’s recommendations to the international community for driving inclusive growth was shared AI infrastructure, the use of open-source AI models and initiatives to share AI knowledge and resources.
Amazon CEO Andy Jassy said the rapid rollout of generative artificial intelligence means the company will one day require fewer employees to do some of the work that computers can handle.
“Like with every technical transformation, there will be fewer people doing some of the jobs that the technology actually starts to automate,” Jassy told CNBC’s Jim Cramer in an interview on Monday. “But there’s going to be other jobs.”
Even as AI eliminates the need for some roles, Amazon will continue to hire more employees in AI, robotics and elsewhere, Jassy said.
Earlier this month, Jassy admitted that he expects the company’s workforce to decline in the next few years as Amazon embraces generative AI and AI-powered software agents. He told staffers in a memo that it will be “hard to know exactly where this nets out over time” but that the corporate workforce will shrink as Amazon wrings more efficiencies out of the technology.
It’s a message that’s making its way across the tech sector. Salesforce CEO Marc Benioff last week claimed AI is doing 30% to 50% of the work at his software vendor. Other companies such as Shopify and Microsoft have urged employees to adopt the technology in their daily work. The CEO of Klarna said in May that the online lender has managed to shrink its headcount by about 40%, in part due to investments in AI and natural attrition in its workforce.
Jassy said on Monday that AI will free employees from “rote work” and “make all our jobs more interesting,” while enabling staffers to invent better services more quickly than before.
Amazon and other tech companies have also been shrinking their workforces through rolling layoffs over the past several years. Amazon has cut more than 27,000 jobs since the start of 2022, and it’s announced smaller, more targeted layoffs in its retail and devices units in recent months.
Amazon shares are flat so far this year, underperforming the Nasdaq, which has gained 5.5%. The stock is about 10% below its record reached in February, while fellow megacaps Meta, Microsoft and Nvidia are all trading at or very near record highs.
Stablecoin issuer Circle Internet Group has applied for a national trust bank charter, moving forward on its mission to bring stablecoins into the traditional financial world after the firm’s big market debut this month, CNBC confirmed.
Shares rose 1% after hours.
If the Office of the Comptroller of the Currency grants the bank charter, Circle will establish the First National Digital Currency Bank, N.A. Under the charter, Circle, which issues the USDC stablecoin, would also be able to offer custody services to institutional clients in the future for assets that could include representations of stocks and bonds on a blockchain network.
Reuters first reported on Circle’s bank charter application.
There are no plans to change the management of Circle’s USDC reserves, which are currently held with other major banks.
Circle’s move comes after a wildly successful IPO and debut trading month on the public markets. Shares of the company are up 484% in June. The company is also benefiting from a wave of optimism after the Senate’s passage of the GENIUS Act, which would give the U.S. a regulatory framework for stablecoins.
Having a federally regulated trust charter would also help Circle meet requirements under the GENIUS Act.
“Establishing a national digital currency trust bank of this kind marks a significant milestone in our goal to build an internet financial system that is transparent, efficient and accessible,” Circle CEO Jeremy Allaire said in a statement shared with CNBC. “By applying for a national trust charter, Circle is taking proactive steps to further strengthen our USDC infrastructure.”
“Further, we will align with emerging U.S. regulation for the issuance and operation of dollar-denominated payment stablecoins, which we believe can enhance the reach and resilience of the U.S. dollar, and support the development of crucial, market neutral infrastructure for the world’s leading institutions to build on,” he said.