

2024 is set to be the biggest global election year in history, coinciding with the rapid rise of deepfakes. In APAC alone, deepfakes surged by 1,530% from 2022 to 2023, according to a Sumsub report.

Fotografielink | iStock | Getty Images

Ahead of the Indonesian elections on Feb. 14, a video of late Indonesian president Suharto advocating for the political party he once presided over went viral. 

The AI-generated deepfake video that cloned his face and voice racked up 4.7 million views on X alone. 

This was not a one-off incident. 

In Pakistan, a deepfake of former prime minister Imran Khan emerged around the national elections, announcing his party was boycotting them. Meanwhile, in the U.S., New Hampshire voters heard a deepfake robocall of President Joe Biden asking them not to vote in the presidential primary. 

Deepfakes of politicians are becoming increasingly common, especially with 2024 set to be the biggest global election year in history. 

Reportedly, at least 60 countries and more than four billion people will be voting for their leaders and representatives this year, which makes deepfakes a matter of serious concern.

Rise of election deepfake risks

According to a Sumsub report in November, the number of deepfakes worldwide rose tenfold from 2022 to 2023. In APAC alone, deepfakes surged by 1,530% during the same period.

Online media, including social platforms and digital advertising, saw the biggest rise in identity fraud rates, at 274% between 2021 and 2023. Professional services, healthcare, transportation and video gaming were also among the industries impacted by identity fraud.

Asia is not ready to tackle deepfakes in elections in terms of regulation, technology, and education, said Simon Chesterman, senior director of AI governance at AI Singapore. 

In its 2024 Global Threat Report, cybersecurity firm CrowdStrike reported that, with the number of elections scheduled this year, nation-state actors including those from China, Russia and Iran are highly likely to conduct misinformation or disinformation campaigns to sow disruption. 

“The more serious interventions would be if a major power decides they want to disrupt a country’s election — that’s probably going to be more impactful than political parties playing around on the margins,” said Chesterman. 

Although several governments have tools (to prevent online falsehoods), the concern is the genie will be out of the bottle before there’s time to push it back in.

Simon Chesterman

Senior director, AI Singapore

However, most deepfakes will still be generated by actors within the respective countries, he said. 

Carol Soon, principal research fellow and head of the society and culture department at the Institute of Policy Studies in Singapore, said domestic actors may include opposition parties and political opponents, as well as extreme right-wingers and left-wingers.

Deepfake dangers


Adam Meyers, head of counter adversary operations at CrowdStrike, said that deepfakes may also invoke confirmation bias in people: “Even if they know in their heart it’s not true, if it’s the message they want and something they want to believe in they’re not going to let that go.”  

Chesterman also said that fake footage showing misconduct during an election, such as ballot stuffing, could cause people to lose faith in the validity of an election.

On the flip side, candidates may deny true but negative or unflattering claims about themselves, attributing them to deepfakes instead, Soon said. 


Who should be responsible?

There is a realization now that more responsibility needs to be taken on by social media platforms because of the quasi-public role they play, said Chesterman. 

In February, 20 leading tech companies, including Microsoft, Meta, Google, Amazon and IBM, as well as artificial intelligence startup OpenAI and social media companies such as Snap, TikTok and X, announced a joint commitment to combat the deceptive use of AI in elections this year. 

The tech accord signed is an important first step, said Soon, but its effectiveness will depend on implementation and enforcement. With tech companies adopting different measures across their platforms, a multi-pronged approach is needed, she said. 

Tech companies will also have to be very transparent about the kinds of decisions that are made, for example, the kinds of processes that are put in place, Soon added. 

But Chesterman said it is also unreasonable to expect private companies to carry out what are essentially public functions. Deciding what content to allow on social media is a hard call to make, and companies may take months to decide, he said. 


“We should not just be relying on the good intentions of these companies,” Chesterman added. “That’s why regulations need to be established and expectations need to be set for these companies.”

Toward this end, the Coalition for Content Provenance and Authenticity (C2PA), a nonprofit, has introduced digital credentials for content, which show viewers verified information such as who created the content, where and when it was created, and whether generative AI was used to produce it.

C2PA member companies include Adobe, Microsoft, Google and Intel.

OpenAI has announced it will implement C2PA content credentials for images created with its DALL·E 3 offering early this year.
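To make the idea concrete, here is a minimal, purely illustrative sketch of how provenance credentials of this kind work. It is not the real C2PA format (which embeds certificate-signed manifests inside the media file); every name below is hypothetical, and a simple HMAC stands in for a real signing certificate. The point is only that the credential binds the provenance fields the article describes to a hash of the content, so any tampering is detectable:

```python
import hashlib
import hmac
import json

# Stand-in for a publisher's signing certificate (hypothetical).
SIGNING_KEY = b"publisher-secret-key"

def make_manifest(content: bytes, creator: str, created_at: str,
                  location: str, generative_ai: bool) -> dict:
    """Bundle the provenance fields with a hash of the content, then sign."""
    manifest = {
        "creator": creator,
        "created_at": created_at,
        "location": location,
        "generative_ai": generative_ai,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the content still matches the manifest and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...image bytes..."
m = make_manifest(image, "Jane Photographer", "2024-02-14", "Jakarta", False)
print(verify_manifest(image, m))        # True: untouched content
print(verify_manifest(b"tampered", m))  # False: content no longer matches
```

A viewer checking the credential can thus trust the creator, date, location and generative-AI fields only as long as both the content hash and the signature still verify.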

“I think it’d be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ Like, we’re gonna have to watch this relatively closely this year [with] super tight monitoring [and] super tight feedback.”

Sam Altman

CEO, OpenAI

In a Bloomberg House interview at the World Economic Forum in January, OpenAI co-founder and CEO Sam Altman said the company was “quite focused” on ensuring its technology wasn’t being used to manipulate elections.

“I think our role is very different than the role of a distribution platform” like a social media site or news publisher, he said. “We have to work with them, so it’s like you generate here and you distribute here. And there needs to be a good conversation between them.”

Meyers suggested creating a bipartisan, non-profit technical entity with the sole mission of analyzing and identifying deepfakes.

“The public can then send them content they suspect is manipulated,” he said. “It’s not foolproof but at least there’s some sort of mechanism people can rely on.”

But ultimately, while technology is part of the solution, a large part of it comes down to consumers, who are still not ready, said Chesterman. 

Soon also highlighted the importance of educating the public. 

“We need to continue outreach and engagement efforts to heighten the sense of vigilance and consciousness when the public comes across information,” she said. 

The public needs to be more vigilant: besides fact-checking information that seems highly suspicious, users also need to verify critical pieces of information before sharing them with others, she said. 

“There’s something for everyone to do,” Soon said. “It’s all hands on deck.”

— CNBC’s MacKenzie Sigalos and Ryan Browne contributed to this report.


Elon Musk’s Neuralink filed as ‘disadvantaged business’ before being valued at $9 billion


Jonathan Raa | Nurphoto | Getty Images

Elon Musk’s health tech company Neuralink labeled itself a “small disadvantaged business” in a federal filing with the U.S. Small Business Administration, shortly before a financing round valued the company at $9 billion.

Neuralink is developing a brain-computer interface (BCI) system, with an initial aim to help people with severe paralysis regain some independence. BCI technology broadly can translate a person’s brain signals into commands that allow them to manipulate external technologies just by thinking.

Neuralink’s filing, dated April 24, would have reached the SBA at a time when Musk was leading the Trump administration’s Department of Government Efficiency. At DOGE, Musk worked to slash the size of federal agencies.

MuskWatch first reported the details of Neuralink’s April filing.

According to the SBA’s website, a designation of SDB means a company is at least 51% owned and controlled by one or more “disadvantaged” persons who must be “socially disadvantaged and economically disadvantaged.” An SDB designation can also help a business “gain preferential access to federal procurement opportunities,” the SBA website says. 

Musk, the world’s wealthiest person, is CEO of Tesla and SpaceX, in addition to his other businesses like artificial intelligence startup xAI and tunneling venture The Boring Company. In 2022, Musk led the $44 billion purchase of Twitter, which he later renamed X before merging it with xAI.

Jared Birchall, a Neuralink executive, was listed as the contact person on the filing from April. Birchall, who also manages Musk’s money as head of his family office, didn’t immediately respond to a request for comment.

Neuralink, which incorporated in Nevada, closed a $650 million funding round in early June at a $9 billion valuation. ARK Invest, Peter Thiel’s Founders Fund, Sequoia Capital and Thrive Capital were among the investors. Neuralink said the fresh capital would help the company bring its technology to more patients and develop new devices that “deepen the connection between biological and artificial intelligence.”

Under Musk’s leadership at DOGE, the initiative took aim at government agencies that emphasized diversity, equity and inclusion (DEI). In February, for example, DOGE and Musk boasted of nixing hundreds of millions of dollars worth of funding for the Department of Education that would have gone towards DEI-related training grants.

WATCH: DOGE cuts face congressional test



Defense manufacturing startup Hadrian closes $260 million funding round led by Peter Thiel’s Founders Fund


Defense manufacturing startup Hadrian on Thursday announced the closing of a $260 million Series C funding round led by Peter Thiel‘s Founders Fund and Lux Capital.

The machine parts company said it will use the funding to build a new 270,000-square-foot factory in Mesa, Arizona, and expand its Torrance, California, location as it looks to beef up its shipbuilding and naval defense capabilities.

“What we really need in this country is this quantum leap above China’s manufacturing model,” said CEO Chris Power in an interview with CNBC’s Morgan Brennan. “It’s about supercharging the worker versus replacing them.”

Defense tech startups like Hadrian are disrupting the mainstay defense contracting industry, long dominated by incumbents such as Northrop Grumman and Lockheed Martin, and battling it out to boost U.S. defense production while scooping up Department of Defense contracts.

An overall view of the manufacturing line in a Hadrian Automation Inc. factory.

Courtesy: Hadrian Automation, Inc.

Hadrian said the Arizona space will be four times the size of its California facility and will start operations by Christmas. The factory will create 350 local jobs. The Hawthorne, California-based company said it is working on four to five new facilities over the next year to support Department of Defense production needs.


Hadrian said it uses robotics and artificial intelligence to automate factories that can “supercharge American workers.”

Power said demand is rapidly growing, but the lack of U.S.-based talent is a major hurdle to building American dominance in shipbuilding and submarines.

Using its tools, the company said it can train workers within 30 days, making them 10 times more productive. Its workforce includes ex-Marines and former nurses who have never set foot in a factory.


“We have to do a lot more … but certainly we’re able to keep up with the scale right now, and grateful to our team and customers for letting us go and do that,” he said. “As a country, we have to treat this like a national security crisis, not just the economics of manufacturing.”

The fresh raise also includes investments from Andreessen Horowitz and new stakeholders such as Brad Gerstner’s Altimeter Capital.

The company closed a $92 million funding round in late 2023.

WATCH: Startup Hadrian raises $260 million to expand its AI-powered factories to meet soaring demand



Amazon cuts some jobs in cloud computing unit as layoffs continue


Attendees walk through an exposition hall at AWS re:Invent, a conference hosted by Amazon Web Services, in Las Vegas on Dec. 3, 2024.

Noah Berger | Getty Images

Amazon is laying off some staffers in its cloud computing division, the company confirmed on Thursday.

“After a thorough review of our organization, our priorities, and what we need to focus on going forward, we’ve made the difficult business decision to eliminate some roles across particular teams in AWS,” Amazon spokesperson Brad Glasser said in a statement. “We didn’t make these decisions lightly, and we’re committed to supporting the employees throughout their transition.”

The company declined to say which units within Amazon Web Services were impacted, or how many employees will be let go as a result of the job cuts.

Reuters was first to report on the layoffs.

In May, Amazon reported a third straight quarterly revenue miss at AWS. Sales increased 17% to $29.27 billion in the first quarter, slowing from 18.9% in the prior period.

Amazon said the cuts weren’t primarily due to investments in artificial intelligence, but rather a result of ongoing efforts to streamline the workforce and refocus on certain priorities. The company said it continues to hire within AWS.

Amazon CEO Andy Jassy has been on a cost-cutting mission for the past several years, which has resulted in more than 27,000 employees being let go since 2022. Job reductions have continued this year, though at a smaller scale than in preceding years. Amazon’s stores, communications, and devices and services divisions have been hit with layoffs in recent months.

AWS last year cut hundreds of jobs in its physical stores technology and sales and marketing units.

Last month, Jassy predicted that Amazon’s corporate workforce could shrink even further as a result of the company embracing generative AI.

“We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” Jassy told staffers. “It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce.”

WATCH: Amazon CEO says AI will change the workforce

