British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.
The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned by the technology's rapid advancement.
The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.
It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.
Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the race toward the regulation of AI from global policymakers has intensified.
Of particular concern is the potential for the technology to replace — or undermine — human intelligence.
Where it’s being held
The AI summit will be held at Bletchley Park, a historic landmark around 55 miles north of London.
Bletchley Park was a codebreaking facility during World War II.
It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.
It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.
What it seeks to address
The main objective of the U.K. AI summit is to reach some level of international coordination on principles for the ethical and responsible development of AI models.
The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.
It will look to address two key categories of risk when it comes to AI: misuse and loss of control.
Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that evades detection by security researchers, or the technology could be used to help state actors develop dangerous bioweapons.
Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.
Who’s going?
Major names in the technology and political world will be there.
U.S. Vice President Kamala Harris
A Chinese government delegation from the Ministry of Science and Technology
European Commission President Ursula von der Leyen
Who won’t be there?
Several leaders have opted not to attend the summit.
They include:
U.S. President Joe Biden
Canadian Prime Minister Justin Trudeau
French President Emmanuel Macron
German Chancellor Olaf Scholz
When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”
“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.
“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.”
Will it succeed?
The British government wants the AI Summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.
Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.
In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.
“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.
Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.
He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.
A particular point of contention is Sunak's decision to invite China, which has been at the center of a geopolitical tussle with the U.S. over technology. Sunak's spokesperson has said it is important to include China, as the country is a world leader in AI.
International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.
China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.
Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.
Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.
The EU is hoping to finalize its AI Act, which is set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and adopt the regulation by early 2024 before the June European Parliament elections.
Some tech industry officials think the summit's focus is too narrow. They say that restricting it to frontier AI models is a missed opportunity to encourage contributions from members of the wider tech community.
“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.
“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”
Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, immediate-term risks, such as the potential for deepfakes to manipulate 2024 elections.
“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.
“We should be concentrating on the real fires that are literally present threats.”
However, Marc Warner, CEO of British AI startup Faculty.ai, said he believes that focusing on the long-term, potentially devastating risks of achieving artificial general intelligence is "very reasonable."
“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.
“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”
TikTok’s grip on the short-form video market is tightening, and the world’s biggest tech platforms are racing to catch up.
Since launching globally in 2016, ByteDance-owned TikTok has amassed over 1.12 billion monthly active users worldwide, according to Backlinko. American users spend an average of 108 minutes per day on the app, according to Apptopia.
TikTok’s success has reshaped the social media landscape, forcing competitors like Meta and Google to pivot their strategies around short-form video. But so far, experts say that none have matched TikTok’s algorithmic precision.
“It is the center of the internet for young people,” said Jasmine Enberg, vice president and principal analyst at Emarketer. “It’s where they go for entertainment, news, trends, even shopping. TikTok sets the tone for everyone else.”
Platforms like Meta's Instagram Reels and Google's YouTube Shorts have expanded aggressively, launching new features and creator tools, and even considering separate apps just to compete. Microsoft-owned LinkedIn, traditionally a professional networking site, is the latest to experiment with TikTok-style feeds. But with TikTok continuing to evolve, adding features like e-commerce integrations and longer videos, the question remains whether rivals can keep up.
“I’m scrolling every single day. I doom scroll all the time,” said TikTok content creator Alyssa McKay.
But there may be a dark side to this growth.
As short-form content consumption soars, experts warn about shrinking attention spans and rising mental-health concerns, particularly among younger users. Researchers like Dr. Yann Poncin, associate professor at the Child Study Center at Yale University, point to disrupted sleep patterns and increased anxiety levels tied to endless scrolling habits.
“Infinite scrolling and short-form video are designed to capture your attention in short bursts,” Dr. Poncin said. “In the past, entertainment was about taking you on a journey through a show or story. Now, it’s about locking you in for just a few seconds, just enough to feed you the next thing the algorithm knows you’ll like.”
Despite sky-high engagement, monetizing short videos remains an uphill battle. Unlike long-form YouTube content, where ads can be inserted throughout, short clips offer limited space for advertisers. Creators, too, are feeling the squeeze.
“It’s never been easier to go viral,” said Enberg. “But it’s never been harder to turn that virality into a sustainable business.”
Last year, TikTok generated an estimated $23.6 billion in ad revenues, according to Oberlo, but even with this growth, many creators still make just a few dollars per million views. YouTube Shorts pays roughly four cents per 1,000 views, which is less than its long-form counterpart. Meanwhile, Instagram has leaned into brand partnerships and emerging tools like “Trial Reels,” which allow creators to experiment with content by initially sharing videos only with non-followers, giving them a low-risk way to test new formats or ideas before deciding whether to share with their full audience. But Meta told CNBC that monetizing Reels remains a work in progress.
While lawmakers scrutinize TikTok’s Chinese ownership and explore potential bans, competitors see a window of opportunity. Meta and YouTube are poised to capture up to 50% of reallocated ad dollars if TikTok faces restrictions in the U.S., according to eMarketer.
Watch the video to understand how TikTok’s rise sparked a short form video race.
The X logo appears on a phone, and the xAI logo is displayed on a laptop in Krakow, Poland, on April 1, 2025. (Photo by Klaudia Radecka/NurPhoto via Getty Images)
Elon Musk‘s xAI Holdings is in discussions with investors to raise about $20 billion, Bloomberg News reported Friday, citing people familiar with the matter.
The funding would value the company at over $120 billion, according to the report.
Musk was looking to assign "proper value" to xAI, sources familiar with the matter told CNBC's David Faber earlier this month, with the remarks coming during a call with xAI investors. The Tesla CEO didn't explicitly mention an upcoming funding round at the time, but the sources suggested xAI was preparing for a substantial capital raise in the near future.
The funding amount could be more than $20 billion as the exact figure had not been decided, the Bloomberg report added.
Artificial intelligence startup xAI didn’t immediately respond to a CNBC request for comment outside of U.S. business hours.
The AI firm last month acquired X in an all-stock deal that valued xAI at $80 billion and the social media platform at $33 billion.
“xAI and X’s futures are intertwined. Today, we officially take the step to combine the data, models, compute, distribution and talent,” Musk said on X, announcing the deal. “This combination will unlock immense potential by blending xAI’s advanced AI capability and expertise with X’s massive reach.”
Alphabet CEO Sundar Pichai during the Google I/O developers conference in Mountain View, California, on May 10, 2023.
Alphabet‘s stock gained 3% Friday after signaling strong growth in its search and advertising businesses amid a competitive artificial intelligence environment and uncertain macro backdrop.
“GOOGL‘s pace of GenAI product roll-out is accelerating with multiple encouraging signals,” wrote Morgan Stanley‘s Brian Nowak. “Macro uncertainty still exists but we remain [overweight] given GOOGL’s still strong relative position and improving pace of GenAI enabled product roll-out.”
The search giant posted earnings of $2.81 per share on $90.23 billion in revenues. That topped the $89.12 billion in sales and $2.01 per share in earnings expected by LSEG analysts. Revenues grew 12% year over year, ahead of the 10% anticipated by Wall Street.
Net income rose 46% to $34.54 billion, or $2.81 per share. That’s up from $23.66 billion, or $1.89 per share, in the year-ago period. Alphabet said the figure included $8 billion in unrealized gains on its nonmarketable equity securities connected to its investment in a private company.
Adjusted earnings, excluding that gain, were $2.27 per share, according to LSEG, and topped analyst expectations.
Read more CNBC tech news
Alphabet shares have pulled back about 16% this year amid volatility spurred by mounting trade war fears and worries that President Donald Trump's tariffs could crush the global economy. That would make it more difficult for Alphabet to acquire infrastructure for the data centers powering its AI models as it faces off against competitors such as OpenAI and Anthropic to develop large language models.
During Thursday's call with investors, Alphabet suggested that it's too soon to tally the total impact of tariffs. However, Google's business chief Philipp Schindler said that the end of the de minimis trade exemption in May could create a "slight headwind" for the company's ads business, specifically in the Asia-Pacific region. The exemption, which allows shipments under $800 to enter the U.S. duty-free, has been a loophole benefiting many Chinese e-commerce retailers.
Despite this backdrop, Alphabet showed steady growth in its advertising and search business, reporting $66.89 billion in revenues for its advertising unit. That reflected 8.5% growth from the year-ago period. The company reported $8.93 billion in advertising revenue for its YouTube business, shy of an $8.97 billion estimate from StreetAccount.
Alphabet's "Search and other" unit rose 9.8% to $50.7 billion, up from $46.16 billion last year. The company said that its AI Overviews tool, shown on the Google search results page, has accumulated 1.5 billion monthly users, up from 1 billion in October.
Bank of America analyst Justin Post said that Wall Street is underestimating the upside potential and “monetization ramp” from this tool and cloud demand fueled by AI.
“The strong 1Q search performance, along with constructive comments on Gemini [large language model] performance and [AI Overviews] adoption could help alleviate some investor concerns on AI competition,” Post wrote in a note.