
British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.


The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned about the rapid advancement of the technology.

The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.

It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.

Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the race among global policymakers to regulate AI has intensified.

Of particular concern is the potential for the technology to replace — or undermine — human intelligence.

Where it’s being held

The AI summit will be held at Bletchley Park, a historic landmark around 55 miles north of London.

Bletchley Park was a codebreaking facility during World War II.


It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.

It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.

What it seeks to address

The main objective of the U.K. AI summit is to reach some level of international coordination on principles for the ethical and responsible development of AI models.

The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.

It will look to address two key categories of risk when it comes to AI: misuse and loss of control.

Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or state actors could use the technology to help develop dangerous bioweapons.

Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.

Who’s going?

Major names in the technology and political world will be there.

U.S. Vice President Kamala Harris speaks during the conclusion of the Investing in America tour at Coppin State University in Baltimore, Maryland, on July 14, 2023.



Who won’t be there?

Several leaders have opted not to attend the summit.

French President Emmanuel Macron.


They include:

  • U.S. President Joe Biden
  • Canadian Prime Minister Justin Trudeau
  • French President Emmanuel Macron
  • German Chancellor Olaf Scholz

When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”

“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.

“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.” 

Will it succeed?

The British government wants the AI Summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.

Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.

In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.

“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.

Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.

He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.

A particular point of contention is Sunak’s decision to invite China, which has been at the center of a geopolitical tussle over technology with the U.S. Sunak’s spokesperson has said it is important to include China, as the country is a world leader in AI.

International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.

China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.


Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.

Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.

The EU is hoping to finalize its AI Act, which is set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and adopt the regulation by early 2024 before the June European Parliament elections.

Stateside, Biden on Monday issued an executive order on artificial intelligence, the first of its kind from the U.S. government, calling for safety assessments, equity and civil rights guidance, and research into AI’s impact on the labor market.

Shortcomings of the summit

Some tech industry officials think that the summit is too limited in its focus. They say that restricting the summit to frontier AI models is a missed opportunity to encourage contributions from members of the tech community beyond frontier AI.

“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.

“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”

Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, immediate-term risks, such as the potential for deepfakes to manipulate 2024 elections.


“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.

“We should be concentrating on the real fires that are literally present threats.”

However, Marc Warner, CEO of British AI startup Faculty.ai, said he believes that focusing on the long-term, potentially devastating risks of achieving artificial general intelligence is “very reasonable.”

“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.

“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”


Scarlett Johansson says OpenAI ripped off her voice after she told the company it could not use her for voice software


Actress Scarlett Johansson says OpenAI CEO Sam Altman used a voice similar to hers for the company’s artificial intelligence voice software despite her declining his offer.

The response comes after OpenAI said it would pull its ChatGPT AI voice dubbed “Sky,” which launched last week and drew controversy for sounding like Johansson’s voice in the movie “Her.”

Johansson said Altman approached her last September and then again two days before OpenAI announced GPT-4o on May 13. Johansson voiced the character in the film “Her,” about a man who forms a relationship with a virtual artificial intelligence named Samantha.

“After much consideration and for personal reasons, I declined the offer,” Johansson said in a statement to CNBC. “Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.”

Altman tweeted the message “her” on the day OpenAI announced its new AI.

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson’s statement continued. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word: ‘her.’”

The actress wrote Monday that she has hired legal counsel. Johansson has sparred with large companies like Disney in the past. In 2021, Johansson and Walt Disney settled the breach of contract lawsuit the “Black Widow” actor brought against the studio.

“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” Microsoft-backed OpenAI posted on X on Monday.

“Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote in a blog post on Sunday. “To protect their privacy, we cannot share the names of our voice talents.”

Johansson said she wrote two letters to Altman and OpenAI, asking them to detail the process of creating Sky.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” her statement says. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

OpenAI didn’t immediately respond to a request for comment on Johansson’s statement.

Here is Scarlett Johansson’s full statement:

“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people. 

After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named “Sky” sounded like me.

When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word “her” – a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.

As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the “Sky” voice. Consequently, OpenAI reluctantly agreed to take down the “Sky” voice.

In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”


Microsoft announces new PCs with AI chips from Qualcomm


Microsoft Chairman and Chief Executive Officer Satya Nadella speaks during the Microsoft May 20 Briefing event at Microsoft in Redmond, Washington, on May 20, 2024. 

Jason Redmond | AFP | Getty Images

Microsoft is touting new computers with advanced chips designed to run artificial intelligence features in Windows software without quickly draining battery life.

The company on Monday announced a Surface Laptop and a Surface Pro tablet with a Qualcomm chip that can run some AI tasks without an internet connection. Other computer makers like Lenovo, Dell, HP, Asus, Acer and Samsung are also launching AI-ready PCs powered by Qualcomm’s Snapdragon X Elite and X Plus processors, which promise longer battery life and will run Microsoft’s Copilot AI chatbot.

Device makers will release PCs with AMD and Intel chips that will adhere to the Copilot+ standard at a later time, Microsoft said during a press keynote address on its campus in Redmond, Washington. The PCs will be able to translate audio, recommend responses to incoming messages and suggest changes in the Settings app, and even talk with people about what’s on screen.

Copilot+ PCs will start at $999. Microsoft is accepting pre-orders as of Monday, and the devices will become available in June.

A Recall feature will be able to search through a log of previous actions on PCs. Recall relies on AI models that run directly on the device, so it can run offline, and an index of the data never goes to remote servers. AI models will be able to generate images based on written descriptions as well as drawings.

Microsoft is banking on Qualcomm’s energy-efficient Arm-based chips that can handle AI models to defend its Windows franchise. Apple has gained market share in PC shipments with MacBooks containing its Arm-based chips, having moved away from Intel, the top provider of computer processors.

Microsoft is expanding its effort to surround consumers and business users with ChatGPT-like capabilities. OpenAI, backed by Microsoft, released the ChatGPT chatbot in late 2022, and it took off as a tool for quickly obtaining computer-generated poems, email drafts and summaries of historical events.

Other large technology companies, including Microsoft, soon started augmenting their products with generative AI. A Copilot chatbot drawing on ChatGPT’s underlying AI models came to the Bing search engine, along with the Windows 10 and 11 operating systems. Those with Office productivity software subscriptions could pay extra to have a Copilot refer to their documents for written responses.

The GPT-4 model inside ChatGPT has so far done its computing work exclusively in Microsoft’s Azure cloud. The new PCs can run some AI models locally, without an internet connection.

The launch comes nearly four months after Microsoft CEO Satya Nadella told analysts on the company’s earnings call that “in 2024, AI will become a first-class part of every PC.”

Microsoft has had little success in getting people to adopt Arm-based Windows computers, which haven’t always performed as well as PCs running Intel or AMD chips. Certain applications have been incompatible.

Running generative AI locally means computers will need more power, and strong battery life becomes more critical. That might make Windows on Arm more compelling.

Analysts with Morgan Stanley expect Arm systems to be 14% of all Windows PC shipments in 2026, up from 0% in 2023, according to a note distributed to clients earlier this month.

Microsoft shares closed up 1.2% Monday at $425.34, just shy of a record reached in March. Qualcomm rose 2% to $197.76, a record close.


With JPMorgan, Mastercard on board in biometric ‘breakthrough’ year, you may soon start paying with your face


Automated fast food restaurant CaliExpress by Flippy, in Pasadena, Calif., opened in January to considerable hype due to its robot burger makers, but the restaurant launched with another, less heralded innovation: the ability to pay for your meal with your face.

CaliExpress uses a payment system from facial ID tech company PopID. To activate it, users register with a selfie; at checkout, they can opt to be recognized, and PopID’s facial verification confirms the transaction.

It’s not the only fast-food chain to employ the technology. In January, Steak ‘N Shake, a fast-casual restaurant in the Midwest, started installing facial recognition kiosks in its 300 locations for patron check-in. The chain says that using PopID takes two to three seconds compared with a check-in with a QR code or mobile app, which can take up to 20 seconds.

Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success — 76% of pilot participants said they would recommend the technology to a friend. Late last year, Mastercard said it was teaming with NEC to bring its Biometric Checkout Program to the Asia-Pacific region.

“Our focus on biometrics as a secure way to verify identity, replacing the password with the person, is at the heart of our efforts in this area,” said Dennis Gamiello, executive vice president of identity products and innovation at Mastercard. He added that based on positive feedback from the pilot and its research, the checkout technology will come to more new markets later this year.

As stores implement biometric technology for a variety of purposes, from payments to broader anti-theft systems, consumer blowback and lawsuits are rising. In March, an Illinois woman sued retailer Target for allegedly illegally collecting and storing her and other customers’ biometric data via facial recognition technology without their consent. Amazon and T-Mobile are also facing legal actions related to biometric technology.

In other countries, most notably China, biometric payment systems are comparatively mature: visitors to McDonald’s locations there can use facial recognition technology to pay for their orders, and AliPay launched biometric payments as far back as 2015, then began testing the technology at KFC locations in China in 2018.

A deal that PopID recently signed with JPMorgan is a sign of things to come in the U.S., said PopID CEO John Miller, in what he thinks will be a “breakthrough” year for pay-by-face technology.

The consumer case is tied to the growing importance of loyalty programs. Most quick-service restaurants require consumers to provide their loyalty information to earn rewards — which means pulling out a phone, opening an app, finding the link to the loyalty QR code, and then presenting the QR code to the cashier or reader. For payment, consumers typically choose between pulling out their wallet and selecting a credit card to dip or tap, or pulling out their phone, unlocking it with Face ID, and presenting it to the reader. Miller says PopID simplifies this process to tapping an on-screen button and looking briefly at a camera for both loyalty check-in and payment.

“We believe our partnership with JPMorgan is a watershed moment for biometric payments as it represents the first time a leading merchant acquirer has agreed to push biometric payments to its merchant customers,” Miller said. “JPMorgan brings the kind of credibility and assurance that both merchants and consumers need to adopt biometric payments.”

Consumers are getting more comfortable with biometric technology. The majority still prefer fingerprint scans to facial recognition, according to a 2023 survey from PYMNTS, but age is a factor. Gen Z consumers are more open to facial recognition than to fingerprint scans or entering a password.

Juniper Research forecasts that global biometric payments will more than double between 2024 and 2028, and that mobile, biometric-secured payments will reach $3 trillion by 2025.

To be sure, security concerns, particularly the risk that shared biometric data could be hacked, will remain important to the technology’s evolving usage and the conversation around it.

Sheldon Jacobson, a professor in computer science at the University of Illinois, Urbana-Champaign, said he sees biometric identification as part of a technology continuum that has evolved from payment with a credit card to smartphones. “The next natural step is to simply use facial recognition,” he said.

Concerns about privacy and facial recognition, he says, are overblown. “We voluntarily give up our privacy all the time,” Jacobson said. “We post on Facebook, we use social media and we are basically giving up our privacy. I tell people constantly that everything about you is already out there.” 
