British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.
Peter Nicholls | Getty Images News | Getty Images
The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned about the rapid advancement of the technology.
The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.
It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.
Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the race among global policymakers to regulate AI has intensified.
Of particular concern is the potential for the technology to replace — or undermine — human intelligence.
Where it’s being held
The AI summit will be held at Bletchley Park, the historic landmark around 55 miles north of London.
Bletchley Park was a codebreaking facility during World War II.
Getty
It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.
It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.
What it seeks to address
The main objective of the U.K. AI summit is to achieve some level of international coordination by agreeing on principles for the ethical and responsible development of AI models.
The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.
It will look to address two key categories of risk when it comes to AI: misuse and loss of control.
Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or the technology could be used to help state actors develop dangerous bioweapons.
Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.
Who’s going?
Major names from the technology and political worlds will be there.
U.S. Vice President Kamala Harris
A Chinese government delegation from the Ministry of Science and Technology
European Commission President Ursula von der Leyen
Who won’t be there?
Several leaders have opted not to attend the summit.
French President Emmanuel Macron.
Chesnot | Getty Images News | Getty Images
They include:
U.S. President Joe Biden
Canadian Prime Minister Justin Trudeau
French President Emmanuel Macron
German Chancellor Olaf Scholz
When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”
“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.
“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.”
Will it succeed?
The British government wants the AI summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.
Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.
In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.
“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.
Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.
He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.
A particular point of contention is Sunak’s decision to invite China, which has been at the center of a geopolitical tussle over technology with the U.S., to the summit. Sunak’s spokesperson has said it is important to include China, as the country is a world leader in AI.
International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.
China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.
Saul Loeb | Afp | Getty Images
Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.
Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.
The EU is hoping to finalize its AI Act, set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and to adopt the regulation by early 2024, ahead of the June European Parliament elections.
Some tech industry officials think the summit’s focus is too narrow. By restricting the agenda to frontier AI models, they say, it misses an opportunity to draw contributions from members of the tech community beyond frontier AI.
“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.
“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”
Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, immediate-term risks, such as the potential for deepfakes to manipulate 2024 elections.
“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.
“We should be concentrating on the real fires that are literally present threats.”
However, Marc Warner, CEO of British AI startup Faculty.ai, said he considers focusing on the long-term, potentially devastating risks of achieving artificial general intelligence to be “very reasonable.”
“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.
“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”
Colin Angle, co-founder and chief executive officer of iRobot Corp., speaks during a Prime Air delivery drone reveal event in Las Vegas, Nevada, U.S., on Wednesday, June 5, 2019.
Joe Buglewicz | Bloomberg | Getty Images
Colin Angle, co-founder and former CEO of iRobot, on Monday said the company’s move to declare bankruptcy was “profoundly disappointing” and “nothing short of a tragedy for consumers.”
The robotic vacuum pioneer announced Sunday that it filed for bankruptcy and will be taken private by Shenzhen Picea Robotics, a lender and key supplier, following years of financial struggles.
“Today’s outcome is profoundly disappointing — and it was avoidable,” Angle told CNBC in a statement. “This is nothing short of a tragedy for consumers, the robotics industry, and America’s innovation economy.”
In a Sunday court filing, iRobot said it had between $100 million and $500 million of assets and liabilities. The company said it owes almost $100 million to its new owner Picea, more than $5.8 million to GXO Logistics and roughly $3.4 million to U.S. Customs and Border Protection for unpaid tariffs, among other liabilities.
Shares of iRobot plunged more than 72% on Monday.
Founded in 1990 by Angle and two other researchers at the Massachusetts Institute of Technology, iRobot got its start making military and defense tech for the government before launching its flagship Roomba in 2002, which cemented its position as an early leader in the robotic vacuum market.
The company’s future has remained uncertain after Amazon abandoned its planned $1.7 billion acquisition of the company in January 2024, citing regulatory scrutiny from the European Union and the U.S. Federal Trade Commission. Afterward, iRobot laid off 31% of staff and Angle announced he would step down as CEO and board chair.
Amazon CEO Andy Jassy called regulators’ efforts to block the deal a “sad story” and said it would’ve given iRobot a competitive boost against rivals.
The Amazon acquisition was “the most viable path” for iRobot to compete globally, Angle said Monday. He added that iRobot’s bankruptcy serves as a “warning” for competition watchdogs.
Helen Greiner, one of iRobot’s co-founders, said in a Monday LinkedIn post that the company’s restructuring plan under a Chinese owner isn’t good for “consumers, employees, stockholders, Massachusetts or the USA.”
The company had been facing growing competition from cheaper, rapidly growing rivals, such as China-based Anker, Ecovacs and Roborock. Supply chain constraints in recent years added further strain to iRobot’s business, as it struggled to navigate shipping and inventory delays, which dented its revenue.
Its financial outlook darkened significantly after the Amazon deal fell apart, and in October, iRobot said it would be forced to seek bankruptcy protection if it failed to secure more capital or find a buyer.
Gary Cohen, iRobot CEO, said in a statement Monday that the restructuring plan would help secure the company’s “long-term future.” The bankruptcy proceedings aren’t expected to disrupt its products’ functionality or customer support, iRobot said.
The company’s third-quarter sales came in at $145.8 million, down almost 25% from $193.4 million one year earlier, and iRobot has about $190 million in debt.
In at least one corner of the artificial intelligence market, sentiment has turned decidedly negative.
Broadcom, CoreWeave and Oracle, three companies intimately tied to the AI infrastructure buildout, all had another rough day on Wall Street on Monday after selling off sharply last week.
While the three stocks are all still solidly up for the year — CoreWeave held its market debut in March — the most recent trend suggests that investors are concerned about whether the returns on investment will ever justify the level of spending taking place.
“It definitely requires the ROI to be there to keep funding this AI investment,” Matt Witheiler, head of late-stage growth at Wellington Management, told CNBC’s “Money Movers” on Monday. “From what we’ve seen so far that ROI is there.”
Witheiler said the bullish side of the story is that, “every single AI company on the planet is saying if you give me more compute I can make more revenue.”
Still, the market was displeased last week with quarterly earnings reports from chipmaker Broadcom and cloud infrastructure supplier Oracle, even though both companies beat on revenue and issued forecasts showing that AI demand is soaring.
Oracle, which is now heavily reliant on the debt markets to fund its data center development, provided scant details about how it will continue to finance its commitments. The company said it would ramp up capital expenditures in the current fiscal year to $50 billion from an earlier forecast of $35 billion because of new contracts from the likes of Meta and Nvidia.
It’s also ratcheting up leases. As of Nov. 30, Oracle had $248 billion in lease commitments for data centers and cloud capacity that will run for 15 to 19 years. That’s up 148% from the end of August.
Meanwhile, Broadcom CEO Hock Tan said he expects AI chip sales this quarter to double from a year earlier to $8.2 billion, driven by both custom chips as well as semiconductors for AI networking.
However, as the company spends heavily on more parts to produce server racks, investors are going to have to stomach a hit to profits. CFO Kirsten Spears said on Broadcom’s earnings call that “gross margins will be lower” for some of the company’s AI chip systems.
Broadcom shares fell about 5% on Monday following an 11% slump on Friday, leaving them 17% below their record high reached on Wednesday.
Oracle dropped about 2.5% on Monday and is now down 17% in the past three trading days. The company has lost 46% of its value since Sept. 10, when the stock had its best day since 1992 following disclosure of a massive AI backlog.
Venture capitalist Tomasz Tunguz, who focuses on enterprise software and AI, wrote in a Monday blog that Oracle’s recent fundraising binge has left it with a debt-to-equity ratio of 500%, “dwarfing its cloud computing peers.” Amazon, Microsoft, Meta and Google all have ratios between 7% and 23%, he wrote.
Tunguz, founder of Theory Ventures, said the other company with a notably high ratio, at 120%, is CoreWeave, which provides cloud computing services built largely around Nvidia’s graphics processing units.
CoreWeave shares fell about 6% on Monday after dropping 11% last week. The company has lost 60% of its value from its high in June.
Lee previously led corporate development for Google Cloud and Google DeepMind. He worked on several of Google’s high-profile acquisitions, including its $32 billion purchase of the cloud security startup Wiz, which the company announced in March.
In his new role, Lee will have broad visibility across OpenAI as the company focuses on strategic investments and M&A in its next phase of growth, the spokesperson said. His hiring signals that OpenAI will continue to hunt for targets that can help it gain an edge over rivals like Google and Anthropic.
OpenAI was founded as a nonprofit research lab in 2015, but its valuation has ballooned to $500 billion since the launch of ChatGPT in 2022.
The AI lab has made multiple acquisitions this year. Most recently, OpenAI earlier this month announced a definitive agreement to acquire Neptune, a startup that helps with AI model training. The companies did not disclose the terms.
OpenAI also bought a small company called Software Applications Incorporated for an undisclosed sum in October, the product development startup Statsig for $1.1 billion in September and former Apple designer Jony Ive’s AI devices startup io for more than $6 billion in May.
Lee is the latest of several executives to join OpenAI as the company looks to fill out its leadership bench.
Earlier this month, OpenAI announced Slack CEO Denise Dresser will serve as its chief revenue officer. In May, the company announced it hired Fidji Simo, who was then CEO of Instacart, as the head of the AI lab’s applications business.
The Information was first to report Lee’s departure from Google.