British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.
The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned by the rapid advancement of the technology.
The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.
It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.
Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the race toward the regulation of AI from global policymakers has intensified.
Of particular concern is the potential for the technology to replace — or undermine — human intelligence.
Where it’s being held
The AI summit will be held at Bletchley Park, the historic landmark around 55 miles north of London.
Bletchley Park was a codebreaking facility during World War II.
It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.
It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.
What it seeks to address
The main objective of the U.K. AI summit is to reach some level of international coordination on principles for the ethical and responsible development of AI models.
The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.
It will look to address two key categories of risk when it comes to AI: misuse and loss of control.
Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or a state actor could use it to help develop dangerous bioweapons.
Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.
Who’s going?
Major names in the technology and political world will be there.
They include:
U.S. Vice President Kamala Harris
A Chinese government delegation from the Ministry of Science and Technology
European Commission President Ursula von der Leyen
Who won’t be there?
Several leaders have opted not to attend the summit.
They include:
U.S. President Joe Biden
Canadian Prime Minister Justin Trudeau
French President Emmanuel Macron
German Chancellor Olaf Scholz
When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”
“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.
“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.”
Will it succeed?
The British government wants the AI Summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.
Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.
In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.
“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.
Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.
He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.
A particular point of contention is Sunak’s decision to invite China — which has been at the center of a geopolitical tussle over technology with the U.S. — to the summit. Sunak’s spokesperson has said it is important to include China, as the country is a world leader in AI.
International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.
China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.
Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.
Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.
The EU is hoping to finalize its AI Act, which is set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and adopt the regulation by early 2024 before the June European Parliament elections.
Some tech industry officials think the summit is too limited in its focus. They say that restricting the summit to frontier AI models is a missed opportunity to encourage contributions from members of the tech community beyond frontier AI.
“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.
“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”
Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, immediate-term risks, such as the potential for deepfakes to manipulate 2024 elections.
“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.
“We should be concentrating on the real fires that are literally present threats.”
However, Marc Warner, CEO of British AI startup Faculty.ai, said he believes that focusing on the long-term, potentially devastating risks of achieving artificial general intelligence is “very reasonable.”
“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.
“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”
An exterior view of the new JPMorgan Chase global headquarters building at 270 Park Avenue on Nov. 13, 2025 in New York City.
JPMorgan Chase has secured deals ensuring it will get paid by the fintech firms responsible for nearly all the data requests made by third-party apps connected to customer bank accounts, CNBC has learned.
The bank has signed updated contracts with fintech middlemen that make up more than 95% of the data pulls on its systems, including Plaid, Yodlee, Morningstar and Akoya, according to JPMorgan spokesman Drew Pusateri.
“We’ve come to agreements that will make the open banking ecosystem safer and more sustainable and allow customers to continue reliably and securely accessing their favorite financial products,” Pusateri said in a statement. “The free market worked.”
The milestone is the latest twist in a long-running dispute between traditional banks and the fintech industry over access to customer accounts. For years, middlemen like Plaid paid nothing to tap bank systems when a customer wanted to use a fintech app like Robinhood to draw funds or check balances.
That dynamic appeared to be enshrined in law in late 2024 when the Biden-era Consumer Financial Protection Bureau finalized what is known as the “open-banking rule” requiring banks to share customer data with other financial firms at no cost.
But banks sued to prevent the CFPB rule from taking hold and seemed to gain the upper hand in May after the Trump administration asked a federal court to vacate the rule.
Soon after, JPMorgan — the largest U.S. bank by assets, deposits and branches — reportedly told the middlemen that it would start charging what amounts to hundreds of millions of dollars for access to its customer data.
In response, fintech, crypto and venture capital executives argued that the bank was engaging in “anti-competitive, rent-seeking behavior” that would hurt innovation and consumers’ ability to use popular apps.
After weeks of negotiations between JPMorgan and the middlemen, the bank agreed to lower pricing than it originally proposed, while the fintech middlemen won concessions regarding the servicing of data requests, according to people with knowledge of the talks.
Fintech firms preferred the certainty of locking in data-sharing rates because it is unclear whether the current CFPB, which is in the process of revising the open-banking rule, will favor banks or fintechs, according to a venture capital investor who asked for anonymity to discuss his portfolio companies.
The bank and the fintech firms declined to disclose details about their contracts, including how much the middlemen agreed to pay and how long the deals would be in force.
Wider impact
The deals mark a shift in the power dynamic between banks, middlemen and the fintech apps that are increasingly threatening incumbents. More banks are likely to begin charging fintechs for access to their systems, according to industry observers.
“JPMorgan tends to be a trendsetter. They’re sort of the leader of the pack, so it’s fair to expect that the rest of the major banks will follow,” said Brian Shearer, director of competition and regulatory policy at the Vanderbilt Policy Accelerator.
Shearer, who worked at the CFPB under former director Rohit Chopra, said he was worried that the development would create a barrier to entry for nascent startups and ultimately result in higher costs for consumers.
Proponents of the 2024 CFPB rule said it gave consumers control over their financial data and encouraged competition and innovation. Banks including JPMorgan said it exposed them to fraud and unfairly saddled them with the rising costs of maintaining systems increasingly tapped by the middlemen and their clients.
When Plaid’s deal with JPMorgan was announced in September, the companies issued a dual press release emphasizing the continuity it provided for customers.
But the industry group that Plaid is a part of has harshly criticized the development, signaling that while JPMorgan has won a decisive battle, the ongoing skirmish may yet play out in courts and in the public.
“Introducing prohibitive tolls is anti-competitive, anti-innovation, and flies in the face of the plain reading of the law,” Penny Lee, CEO of the Financial Technology Association, told CNBC in response to the JPMorgan milestone.
“These agreements are not the free market at work, but rather big banks using their market position to capitalize on regulatory uncertainty,” Lee said. “We urge the Trump Administration to uphold the law by maintaining the existing prohibition on data access fees.”
Govini has fired Eric Gillespie from its board of directors after the founder was charged with attempting to solicit sexual contact with a minor online.
“The actions of one depraved individual should not in any way diminish the hard work of the broader team and their commitment to the security of the United States of America,” the defense software startup said in a release late Wednesday.
The company said the 57-year-old has had no access to classified information since stepping down as CEO nearly ten years ago.
On Monday, the Pennsylvania Attorney General’s Office charged Gillespie with four felonies, including multiple counts of unlawful contact with a preteen.
A judge denied bail for Gillespie, who lived in Pittsburgh, citing flight risk and public safety concerns.
At the time, Pentagon officials told CNBC that they were investigating the arrest and possible security risks.
Last month, the Arlington, Virginia-based startup surpassed $100 million in annual recurring revenue and announced a $150 million growth investment from Bain Capital.