
British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.

Peter Nicholls | Getty Images News | Getty Images

The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned about the rapid advancement of the technology.

The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.

It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.

Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the race among global policymakers to regulate AI has intensified.

Of particular concern is the potential for the technology to replace — or undermine — human intelligence.

Where it’s being held

The AI summit will be held in Bletchley Park, the historic landmark around 55 miles north of London.

Bletchley Park was a codebreaking facility during World War II.

Getty

It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.

It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.

What it seeks to address

The main objective of the U.K. AI summit is to reach some level of international coordination on principles for the ethical and responsible development of AI models.

The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.

It will look to address two key categories of risk when it comes to AI: misuse and loss of control.

Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or AI could be used to help state actors develop dangerous bioweapons.

Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.

Who’s going?

Major names in the technology and political world will be there.

U.S. Vice President Kamala Harris speaks during the conclusion of the Investing in America tour at Coppin State University in Baltimore, Maryland, on July 14, 2023.

Saul Loeb | AFP | Getty Images

They include:

Who won’t be there?

Several leaders have opted not to attend the summit.

French President Emmanuel Macron.

Chesnot | Getty Images News | Getty Images

They include:

  • U.S. President Joe Biden
  • Canadian Prime Minister Justin Trudeau
  • French President Emmanuel Macron
  • German Chancellor Olaf Scholz

When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”

“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.

“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.” 

Will it succeed?

The British government wants the AI Summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.

Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.

In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.

“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.

Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.

He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.

A particular point of contention is Sunak’s decision to invite China, which has been at the center of a geopolitical tussle over technology with the U.S., to the summit. Sunak’s spokesperson has said it is important to invite China, as the country is a world leader in AI.

International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.

China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.

Saul Loeb | AFP | Getty Images

Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.

Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.

The EU is hoping to finalize its AI Act, which is set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and adopt the regulation by early 2024 before the June European Parliament elections.

Stateside, Biden on Monday issued an executive order on artificial intelligence, the first of its kind from the U.S. government, calling for safety assessments, equity and civil rights guidance, and research into AI’s impact on the labor market.

Shortcomings of the summit

Some tech industry officials think that the summit is too limited in its focus. They say that, by keeping the summit restricted to only frontier AI models, it is a missed opportunity to encourage contributions from members of the tech community beyond frontier AI.

“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.

“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”

Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, immediate-term risks, such as the potential for deepfakes to manipulate 2024 elections.


“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.

“We should be concentrating on the real fires that are literally present threats.”

However, Marc Warner, CEO of British AI startup Faculty.ai, said he believes that focusing on the long-term, potentially devastating risks of achieving artificial general intelligence is “very reasonable.”

“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.

“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”

Technology

JPMorgan Chase wins fight with fintech firms over fees to access customer data


An exterior view of the new JPMorgan Chase global headquarters building at 270 Park Avenue on Nov. 13, 2025 in New York City.

Angela Weiss | AFP | Getty Images

JPMorgan Chase has secured deals ensuring it will get paid by the fintech firms responsible for nearly all the data requests made by third-party apps connected to customer bank accounts, CNBC has learned.

The bank has signed updated contracts with fintech middlemen that make up more than 95% of the data pulls on its systems, including Plaid, Yodlee, Morningstar and Akoya, according to JPMorgan spokesman Drew Pusateri.

“We’ve come to agreements that will make the open banking ecosystem safer and more sustainable and allow customers to continue reliably and securely accessing their favorite financial products,” Pusateri said in a statement. “The free market worked.”

The milestone is the latest twist in a long-running dispute between traditional banks and the fintech industry over access to customer accounts. For years, middlemen like Plaid paid nothing to tap bank systems when a customer wanted to use a fintech app like Robinhood to draw funds or check balances.

That dynamic appeared to be enshrined in law in late 2024 when the Biden-era Consumer Financial Protection Bureau finalized what is known as the “open-banking rule” requiring banks to share customer data with other financial firms at no cost.

But banks sued to prevent the CFPB rule from taking hold and seemed to gain the upper hand in May after the Trump administration asked a federal court to vacate the rule.

Soon after, JPMorgan — the largest U.S. bank by assets, deposits and branches — reportedly told the middlemen that it would start charging what amounts to hundreds of millions of dollars for access to its customer data.

In response, fintech, crypto and venture capital executives argued that the bank was engaging in “anti-competitive, rent-seeking behavior” that would hurt innovation and consumers’ ability to use popular apps.

After weeks of negotiations between JPMorgan and the middlemen, the bank agreed to lower pricing than it originally proposed, while the fintech middlemen won concessions regarding the servicing of data requests, according to people with knowledge of the talks.

Fintech firms preferred the certainty of locking in data-sharing rates because it is unclear whether the current CFPB, which is in the process of revising the open-banking rule, will favor banks or fintechs, according to a venture capital investor who asked for anonymity to discuss his portfolio companies.

The bank and the fintech firms declined to disclose details about their contracts, including how much the middlemen agreed to pay and how long the deals were in force.

Wider impact

The deals mark a shift in the power dynamic between banks, middlemen and the fintech apps that are increasingly threatening incumbents. More banks are likely to begin charging fintechs for access to their systems, according to industry observers.  

“JPMorgan tends to be a trendsetter. They’re sort of the leader of the pack, so it’s fair to expect that the rest of the major banks will follow,” said Brian Shearer, director of competition and regulatory policy at the Vanderbilt Policy Accelerator.

Shearer, who worked at the CFPB under former director Rohit Chopra, said he was worried that the development would create a barrier of entry to nascent startups and ultimately result in higher costs for consumers.


Proponents of the 2024 CFPB rule said it gave consumers control over their financial data and encouraged competition and innovation. Banks including JPMorgan said it exposed them to fraud and unfairly saddled them with the rising costs of maintaining systems increasingly tapped by the middlemen and their clients.  

When Plaid’s deal with JPMorgan was announced in September, the companies issued a dual press release emphasizing the continuity it provided for customers.

But the industry group that Plaid is a part of has harshly criticized the development, signaling that while JPMorgan has won a decisive battle, the ongoing skirmish may yet play out in courts and in the public.

“Introducing prohibitive tolls is anti-competitive, anti-innovation, and flies in the face of the plain reading of the law,” Penny Lee, CEO of the Financial Technology Association, told CNBC in response to the JPMorgan milestone.

“These agreements are not the free market at work, but rather big banks using their market position to capitalize on regulatory uncertainty,” Lee said. “We urge the Trump Administration to uphold the law by maintaining the existing prohibition on data access fees.”

Technology

Founder Eric Gillespie fired from Govini board after child sex solicitation arrest



Govini has fired Eric Gillespie from its board of directors after the founder was charged with attempting to solicit sexual contact with a minor online.

“The actions of one depraved individual should not in any way diminish the hard work of the broader team and their commitment to the security of the United States of America,” the defense software startup said in a release late Wednesday.

The company said the 57-year-old has had no access to classified information since stepping down as CEO nearly ten years ago.

On Monday, the Pennsylvania Attorney General’s Office charged Gillespie with four felonies, including multiple counts of unlawful contact with a preteen.

A judge denied bail for Gillespie, who lived in Pittsburgh, citing flight risk and public safety concerns.

At the time, Pentagon officials told CNBC that they were investigating the arrest and possible security risks.


Last month, the Arlington, Virginia-based startup surpassed $100 million in annual recurring revenue and announced a $150 million growth investment from Bain Capital.

Govini has a contract worth more than $900 million with the U.S. government and deals with the Department of War.

Gillespie, who is viewed as an expert in government transparency, was named to the Freedom of Information Act Advisory Committee during the Obama administration in 2014.

He previously worked as an executive at business intelligence platform Onvia.

He is a graduate of Miami University and Harvard Business School.
