

Neuralink’s first in-human brain implant has experienced a problem, company says



Elon Musk's startup Neuralink on Wednesday said part of its brain implant malfunctioned after it put the system in a human patient for the first time.

Neuralink has built a brain-computer interface, or a BCI, that could eventually help patients with paralysis control external technology using only their minds. The company’s system, called the Link, records neural signals using 1,024 electrodes across 64 “threads” that are thinner than a human hair, according to its website.

In January, Neuralink implanted the device in a 29-year-old patient named Noland Arbaugh as part of a study to test its safety. The company streamed a live video with Arbaugh as he used the BCI in March, and Neuralink said in an April blog post that the surgery went “extremely well.”

But in the weeks that followed, a number of threads retracted from Arbaugh’s brain, Neuralink said in a blog post Wednesday. That left fewer effective electrodes, which inhibited the company’s ability to measure the Link’s speed and accuracy.

Neuralink did not disclose how many threads retracted from the tissue. The company did not immediately respond to CNBC’s request for comment.

As a workaround, Neuralink modified the recording algorithm, enhanced the user interface and worked to improve techniques for translating signals into cursor movements, the blog post said. Neuralink reportedly considered removing the implant, but the problem has not posed a direct risk to Arbaugh’s safety, according to The Wall Street Journal, which earlier reported on the problem. Neuralink published its blog post after the Journal asked the company about the issue, according to the report.
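Neuralink has not published the details of its decoding pipeline, so as a rough illustration of what “translating signals into cursor movements” can involve, here is a minimal sketch of a linear decoder, a common baseline in academic BCI research. The names, array shapes and the dead-channel workaround below are illustrative assumptions, not Neuralink’s actual method.

```python
import numpy as np

# Illustrative sketch only: a linear decoder mapping per-electrode firing
# rates to 2-D cursor velocity, a standard baseline in BCI research.
# Neuralink's real algorithm is unpublished; shapes/names are assumptions.

N_ELECTRODES = 1024  # the Link records from 1,024 electrodes across 64 threads

def fit_decoder(firing_rates, cursor_velocity):
    """Fit weights mapping neural features to intended cursor velocity.

    firing_rates:    (n_samples, N_ELECTRODES) binned spike counts
    cursor_velocity: (n_samples, 2) intended (vx, vy) from a calibration task
    """
    # Least-squares fit: velocity ~ firing_rates @ W
    W, *_ = np.linalg.lstsq(firing_rates, cursor_velocity, rcond=None)
    return W  # shape (N_ELECTRODES, 2)

def decode_step(W, rates):
    """Map one time bin of firing rates to a cursor velocity (vx, vy)."""
    return rates @ W

# If threads retract, their electrodes go silent. One crude mitigation is
# to drop near-constant (dead) channels and refit on the remaining ones.
def drop_dead_channels(firing_rates, threshold=1e-6):
    active = firing_rates.var(axis=0) > threshold
    return firing_rates[:, active], active
```

In this framing, losing threads shrinks the usable feature set, which is consistent with the company’s description of fewer effective electrodes degrading cursor speed and accuracy.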

Though some threads retracted from Arbaugh’s brain tissue, Neuralink said he is using the company’s BCI system for around eight hours a day during the week, and often as many as 10 hours a day on the weekends.

Arbaugh said the Link is like a “luxury overload,” and it has helped him to “reconnect with the world,” according to the blog post.

Neuralink is not the only company that is building a BCI system, and the technology has been explored in academic settings for decades.

Neuralink has a long road of safety and efficacy testing ahead before it can be eligible for approval from the U.S. Food and Drug Administration to commercialize the technology.




Alibaba rolls out latest version of its large language model to meet robust AI demand


The logo of the Alibaba office building is seen in the Huangpu District in Shanghai, June 16, 2023.

Costfoto | Nurphoto | Getty Images

Alibaba Cloud said on Thursday it released the latest version of its large language model after more than 90,000 deployments by companies.

Jingren Zhou, chief technology officer of Alibaba Cloud, said in a statement the firm has seen “many creative applications of the models from across the industries,” including consumer electronics and gaming.

“We look forward to collaborating with our customers and developers in seizing the immense growth opportunities presented by the latest surge in the generative AI development,” said Zhou.

Alibaba Cloud said the latest version of its Tongyi Qianwen model, Qwen2.5, possesses “remarkable advancements in reasoning, code comprehension, and textual understanding compared to its predecessor Qwen2.0.”

Large language models power artificial intelligence applications like OpenAI’s ChatGPT. They are trained on vast amounts of data to generate humanlike responses to user prompts.

The latest Qwen model fares better than OpenAI’s GPT-4 in language and creation capabilities but falls short in other categories such as knowledge, reasoning and math, according to a March analysis by large language model evaluation platform OpenCompass.


Alibaba released Tongyi Qianwen in April 2023 after ChatGPT took the world by storm following its November 2022 launch. An upgraded version, released in October, brought improved capabilities in understanding complex instructions, copywriting, reasoning and memorization, among other areas.

Alibaba Cloud said more than 2.2 million corporate users have accessed Qwen-powered AI services such as DingTalk – Alibaba’s answer to Slack.

The firm also said it has launched a series of new Qwen models to the open-source community and upgraded Model Studio, its generative AI platform, with new AI development resources.
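Alibaba publishes its open-source Qwen checkpoints on Hugging Face, so one minimal way to try a model locally is through the `transformers` library. The sketch below assumes the `Qwen/Qwen1.5-7B-Chat` checkpoint ID as an example; the newer releases described above may ship under different IDs, so substitute whichever Qwen model you intend to use.

```python
# Rough sketch of running an open-source Qwen chat model locally with the
# Hugging Face `transformers` library. The checkpoint ID is an example, not
# necessarily the release discussed in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-7B-Chat"  # assumed checkpoint name on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places weights on available GPUs (requires `accelerate`)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Summarize what a large language model is."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the model's reply
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```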

Other Chinese tech giants such as Baidu and Tencent have released similar chatbots and AI models amid explosive demand for generative AI. Baidu in April said its Ernie bot has exceeded 200 million users after obtaining the green light for public use in August.

Generative AI is also accelerating the development of humanoid robots in China, where the machines could take on factory work and other manual labor.



Generative AI’s disinformation threat is ‘overblown,’ top cyber expert says


2024 is set to be the biggest global election year in history, and it coincides with a rapid rise in deepfakes. In APAC alone, deepfakes surged 1,530% from 2022 to 2023, according to a Sumsub report.

Fotografielink | Istock | Getty Images

Cybersecurity experts fear artificial intelligence-generated content has the potential to distort our perception of reality — a concern that is more troubling in a year filled with critical elections.

But one top expert is going against the grain, suggesting instead that the threat deepfakes pose to democracy may be “overblown.”

Martin Lee, technical lead for Cisco’s Talos security intelligence and research group, told CNBC he thinks that deepfakes — though a powerful technology in their own right — aren’t as impactful as fake news is.

However, new generative AI tools do “threaten to make the generation of fake content easier,” he added.

AI-generated material can often contain telltale indicators suggesting it was not produced by a real person.

Visual content, in particular, has proven vulnerable to flaws. For example, AI-generated images can contain visual anomalies, such as a person with more than two hands, or a limb that’s merged into the background of the image.

Synthetically generated voice audio can be tougher to distinguish from voice clips of real people. But AI is still only as good as its training data, experts say.

“Nevertheless, machine generated content can often be detected as such when viewed objectively. In any case, it is unlikely that the generation of content is limiting attackers,” Lee said.

Experts have previously told CNBC that they expect AI-generated disinformation to be a key risk in upcoming elections around the world.

‘Limited usefulness’

Matt Calkins, CEO of enterprise tech firm Appian, which helps businesses make apps more easily with software tools, said AI has “limited usefulness.”

A lot of today’s generative AI tools can be “boring,” he added. “Once it knows you, it can go from amazing to useful [but] it just can’t get across that line right now.”

“Once we’re willing to trust AI with knowledge of ourselves, it’s going to be truly incredible,” Calkins told CNBC in an interview this week.

That could make AI a more effective, and more dangerous, disinformation tool in the future, Calkins warned, adding that he is unhappy with the progress of efforts to regulate the technology stateside.

It might take AI producing something egregiously “offensive” for U.S. lawmakers to act, he added. “Give us a year. Wait until AI offends us. And then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”

No matter how advanced AI gets, though, Cisco’s Lee says there are some tried and tested ways to spot misinformation — whether it’s been made by a machine or a human.

“People need to know that these attacks are happening and be mindful of the techniques that may be used. When encountering content that triggers our emotions, we should stop, pause, and ask ourselves if the information itself is even plausible,” Lee suggested.

“Has it been published by a reputable source of media? Are other reputable media sources reporting the same thing?” he said. “If not, it’s probably a scam or disinformation campaign that should be ignored or reported.”
