2024 is shaping up to be the biggest global election year in history, and it coincides with a rapid rise in deepfakes. In APAC alone, deepfakes surged by 1,530% from 2022 to 2023, according to a Sumsub report.


Cybersecurity experts fear artificial intelligence-generated content has the potential to distort our perception of reality — a concern that is more troubling in a year filled with critical elections.

But one top expert is going against the grain, suggesting instead that the threat deepfakes pose to democracy may be “overblown.”

Martin Lee, technical lead for Cisco’s Talos security intelligence and research group, told CNBC he thinks that deepfakes — though a powerful technology in their own right — aren’t as impactful as fake news is.

However, new generative AI tools do “threaten to make the generation of fake content easier,” he added.

AI-generated material often contains identifiable indicators suggesting it was not produced by a real person.

Visual content, in particular, has proven vulnerable to flaws. For example, AI-generated images can contain visual anomalies, such as a person with more than two hands, or a limb that’s merged into the background of the image.

It can be tougher to distinguish synthetically generated voice audio from voice clips of real people. But AI is still only as good as its training data, experts say.

“Nevertheless, machine generated content can often be detected as such when viewed objectively. In any case, it is unlikely that the generation of content is limiting attackers,” Lee said.

Experts have previously told CNBC that they expect AI-generated disinformation to be a key risk in upcoming elections around the world.

‘Limited usefulness’

Matt Calkins, CEO of enterprise tech firm Appian, which helps businesses build apps more easily with software tools, said AI has “limited usefulness.”

A lot of today’s generative AI tools can be “boring,” he added. “Once it knows you, it can go from amazing to useful [but] it just can’t get across that line right now.”

“Once we’re willing to trust AI with knowledge of ourselves, it’s going to be truly incredible,” Calkins told CNBC in an interview this week.

That could make it a more effective — and dangerous — disinformation tool in future, Calkins warned, adding he’s unhappy with the progress being made on efforts to regulate the technology stateside.

It might take AI producing something egregiously “offensive” for U.S. lawmakers to act, he added. “Give us a year. Wait until AI offends us. And then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”

No matter how advanced AI gets, though, Cisco’s Lee says there are some tried and tested ways to spot misinformation — whether it’s been made by a machine or a human.

“People need to know that these attacks are happening and be mindful of the techniques that may be used. When encountering content that triggers our emotions, we should stop, pause, and ask ourselves if the information itself is even plausible,” Lee suggested.

“Has it been published by a reputable source of media? Are other reputable media sources reporting the same thing?” he said. “If not, it’s probably a scam or disinformation campaign that should be ignored or reported.”

Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’

Startup Figure AI is developing general-purpose humanoid robots.

Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”

Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”

The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.

In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and Kyle Edelberg, chief engineer, about the robot’s lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”

The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.

Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.

The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.

Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

Figure didn’t immediately respond to a request for comment. Nor did attorneys for Gruendel.

The humanoid robot market remains nascent, with companies such as Tesla and Boston Dynamics pursuing futuristic offerings alongside Figure, while China’s Unitree Robotics prepares for an IPO. Morgan Stanley said in a May report that adoption is “likely to accelerate in the 2030s” and that the market could top $5 trillion by 2050.

