

British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.


The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned by the rapid advancement of the technology.

The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.

It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.

Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the race among global policymakers to regulate AI has intensified.

Of particular concern is the potential for the technology to replace — or undermine — human intelligence.

Where it’s being held

The AI summit will be held in Bletchley Park, the historic landmark around 55 miles north of London.

Bletchley Park was a codebreaking facility during World War II.


It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.

It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.

What it seeks to address

The main objective of the U.K. AI summit is to reach some level of international coordination on principles for the ethical and responsible development of AI models.

The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.

It will look to address two key categories of risk when it comes to AI: misuse and loss of control.

Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or the technology could be used to help state actors develop dangerous bioweapons.

Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.

Who’s going?

Major names in the technology and political world will be there.

U.S. Vice President Kamala Harris speaks during the conclusion of the Investing in America tour at Coppin State University in Baltimore, Maryland, on July 14, 2023.


They include:

Who won’t be there?

Several leaders have opted not to attend the summit.

French President Emmanuel Macron.


They include:

  • U.S. President Joe Biden
  • Canadian Prime Minister Justin Trudeau
  • French President Emmanuel Macron
  • German Chancellor Olaf Scholz

When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”

“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.

“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.” 

Will it succeed?

The British government wants the AI Summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.

Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.

In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.

“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.

Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.

He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.

A particular point of contention is Sunak’s decision to invite China — which has been at the center of a geopolitical tussle over technology with the U.S. — to the summit. Sunak’s spokesperson has said it is important to include China, as the country is a world leader in AI.

International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.

China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.


Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.

Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.

The EU is hoping to finalize its AI Act, which is set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and adopt the regulation by early 2024 before the June European Parliament elections.

Stateside, Biden on Monday issued an executive order on artificial intelligence, the first of its kind from the U.S. government, calling for safety assessments, equity and civil rights guidance, and research into AI’s impact on the labor market.

Shortcomings of the summit

Some tech industry officials think the summit is too limited in its focus. By restricting it to frontier AI models, they say, the government is missing an opportunity to encourage contributions from members of the tech community beyond frontier AI.

“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.

“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”

Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, immediate-term risks, such as the potential for deepfakes to manipulate 2024 elections.


“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.

“We should be concentrating on the real fires that are literally present threats.”

However, Marc Warner, CEO of British AI startup Faculty.ai, said he believes that focusing on the long-term, potentially devastating risks of achieving artificial general intelligence is “very reasonable.”

“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.

“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”


Technology

Australia is trying to enforce the first teen social media ban. Governments worldwide are watching.


In this photo illustration, iPhone screens display various social media apps on the screens on February 9, 2025 in Bath, England.


Australia on Wednesday became the first country to formally bar users under the age of 16 from accessing major social media platforms, a move expected to be closely monitored by global tech companies and policymakers around the world.

Canberra’s ban, which came into effect from midnight local time, targets 10 major services, including Alphabet’s YouTube, Meta’s Instagram, ByteDance’s TikTok, Reddit, Snapchat and Elon Musk’s X.

The controversial rule requires these platforms to take “reasonable steps” to prevent underage access, using age-verification methods such as inference from online activity, facial age estimation via selfies, uploaded IDs, or linked bank details.
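The verification methods described above (behavioral inference, selfie-based estimation, uploaded documents, bank records) can be combined into a layered check that trusts stronger signals first and denies access when nothing usable is available. Here is a minimal sketch of that idea; the `AgeSignals` structure, the fallback order and the deny-by-default behavior are hypothetical illustrations, not Australia's actual rules:

```python
from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16  # Australia's cutoff for the targeted platforms


@dataclass
class AgeSignals:
    # Each field is an independent age estimate; None means the
    # method was unavailable or inconclusive for this user.
    activity_inference: Optional[int] = None   # inferred from online activity
    facial_estimate: Optional[int] = None      # selfie-based age estimation
    document_age: Optional[int] = None         # from an uploaded ID
    bank_verified_age: Optional[int] = None    # from linked bank details


def may_access(signals: AgeSignals) -> bool:
    """Layered check: trust the strongest available signal first.

    Documents and bank records are treated as authoritative; softer
    signals (selfies, activity) only matter when nothing stronger exists.
    """
    for estimate in (signals.document_age,
                     signals.bank_verified_age,
                     signals.facial_estimate,
                     signals.activity_inference):
        if estimate is not None:
            return estimate >= MIN_AGE
    # No usable signal: deny access rather than risk underage use.
    return False
```

One consequence of a "strongest signal wins" ordering is that a verified ID showing age 15 blocks access even if a selfie estimator guesses 20, which mirrors the idea of "reasonable steps" over perfect accuracy.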

All targeted platforms had agreed to comply with the policy to some extent. Elon Musk’s X had been one of the last holdouts, but signaled on Wednesday that it would comply. 

The policy means millions of Australian children are expected to have lost access to their social accounts. 

However, the impact of the policy could be even wider, as it will set a benchmark for other governments considering teen social media bans, including Denmark, Norway, France, Spain, Malaysia and New Zealand. 

Controversial rollout

Ahead of the legislation’s passage last year, a YouGov survey found that 77% of Australians backed the under-16 social media ban. Still, the rollout has faced some resistance since becoming law.

Supporters of the bill have argued it safeguards children from social media-linked harms, including cyberbullying, mental health issues, and exposure to predators and pornography. 

Among those welcoming the official ban on Wednesday was Jonathan Haidt, social psychologist and author of “The Anxious Generation,” a 2024 best-selling book that linked a growing mental health crisis to smartphone and social media usage, especially for the young.


In a post on social media platform X, Haidt commended policymakers in Australia for “freeing kids under 16 from the social media trap.”

“There will surely be difficulties in the early months, but the world is rooting for your success, and many other nations will follow,” he added. 

On the other hand, opponents contend that the ban infringes on freedoms of expression and access to information, raises privacy concerns through invasive age verification, and represents excessive government intervention that undermines parental responsibility.

Those critics include groups like Amnesty Tech, which said in a statement Tuesday that the ban was an ineffective fix that ignored the rights and realities of younger generations.

“The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design,” said Amnesty Tech Programme Director Damini Satija.


Meanwhile, David Inserra, a fellow for free expression and technology at the Cato Institute, warned in a blog post that children would evade the new policy by shifting to new platforms, private apps like Telegram, or VPNs, driving them to “more isolated communities and platforms with fewer protections” where monitoring is harder.

Tech companies like Google have also warned that the policy could be extremely difficult to enforce, while government-commissioned reports have pointed to inaccuracies in age-verification technology, such as selfie-based age-guessing software.

Indeed, on Wednesday, local reports in Australia indicated that many children had already bypassed the ban, with age-assurance tools misclassifying users, and workarounds such as VPNs proving effective.

However, Australian Prime Minister Anthony Albanese had attempted to preempt these issues, acknowledging in an opinion piece on Sunday that the system would not work flawlessly from the start, likening it to liquor laws.

“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear national standard,” he added.

Experts told CNBC that the rollout is expected to continue to face challenges and that regulators would need to take a trial-and-error approach. 

“There’s a fair amount of teething problems around it. Many young people have been posting on TikTok that they successfully evaded the age limitations and that’s to be expected,” said Terry Flew, a professor of digital communication and culture at the University of Sydney. 

“You were never going to get 100% disappearance of every person under the age of 16 from every one of the designated platforms on day one,” he added.

Global implications

Experts told CNBC that the policy rollout in Australia will be closely watched by tech firms and lawmakers worldwide, as other countries consider their own moves to ban or restrict teen social media usage. 

“Governments are responding to how public expectations have changed about the internet and social media, and the companies have not been particularly responsive to moral suasion,” said Flew. 

“We see similar pressures are emerging, particularly, but not exclusively in Europe,” he added.  

The European Parliament passed a non-binding resolution in November advocating a minimum age of 16 for social media access, with parental consent allowed for 13- to 15-year-olds.

The bloc has also proposed banning addictive features such as infinite scrolling and auto-play for minors, which could lead to EU-wide enforcement against non-compliant platforms.


Outside Europe, Malaysia and New Zealand have also been advancing proposals to ban social media for children under 16.

However, laws elsewhere are expected to differ from Australia’s, whether that be regarding age restrictions or age verification processes. 

“My hope is that countries that are looking at implementing similar policies will monitor for what doesn’t work in Australia and learn from our mistakes,” said Tama Leaver, professor at the Department of Internet Studies at Curtin University and a Chief Investigator in the ARC Centre of Excellence for the Digital Child.

“I think platforms and tech companies are also starting to realize that if they don’t want age-gating policies everywhere, they’re going to have to do much better at providing safer, appropriate experiences for young users.”


Technology

CNBC Daily Open: A Fed rate cut might not be festive enough


An eagle sculpture stands on the facade of the Marriner S. Eccles Federal Reserve building in Washington, D.C., U.S., on Friday, Nov. 18, 2016.


On Wednesday stateside, the U.S. Federal Reserve is widely expected to lower its benchmark interest rate by a quarter percentage point to a range of 3.5%-3.75%.

However, traders are all but certain the cut will happen, putting the odds at 87.6%, according to the CME FedWatch tool, which means the move is likely already priced into stocks.
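The FedWatch probability itself is backed out of fed funds futures, which settle at 100 minus the average funds rate for the delivery month, so the futures price encodes the market's expected rate. A simplified single-meeting version of the calculation is sketched below; the futures price and rate midpoints are illustrative numbers, and CME's actual methodology blends pre- and post-meeting days within the month:

```python
def implied_cut_probability(futures_price: float,
                            current_mid: float,
                            cut_size: float = 0.25) -> float:
    """Back out the market-implied odds of a rate cut.

    Fed funds futures settle at 100 minus the average funds rate,
    so implied_rate is the expected rate. With only two outcomes,
    expected rate = p * (current - cut) + (1 - p) * current,
    which rearranges to p = (current - implied) / cut_size.
    """
    implied_rate = 100.0 - futures_price
    p = (current_mid - implied_rate) / cut_size
    # Clamp to [0, 1] so rounding noise can't produce an invalid probability.
    return max(0.0, min(1.0, p))


# Illustrative: current target midpoint 3.875%, futures implying 3.656%
print(round(implied_cut_probability(96.344, 3.875), 3))  # prints 0.876
```

With these assumed inputs the formula reproduces an 87.6% implied probability, matching the figure quoted above.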

That means any whiff of restraint could weigh on equities. In fact, the talk in the markets is that the Fed might deliver a “hawkish cut”: lower rates while suggesting it could be a while before it cuts again.

The “dot plot,” or a projection of where Fed officials think interest rates will end up over the next few years, will be the clearest signal of any hawkishness. Investors will also parse Chair Jerome Powell’s press conference and central bankers’ estimates for U.S. economic growth and inflation to gauge the Fed’s future rate path.

In other words, the Fed could rein in market sentiment even if it cuts rates. Perhaps end-of-year festivities might be muted this year.


And finally…

Researchers inside a lab at the Shenzhen Synthetic Biology Infrastructure facility in Shenzhen, China, on Wednesday, Nov. 26, 2025.


U.S.-China AI talent race heats up

When it comes to brain power, “America’s edge is deteriorating dangerously,” Chris Miller, author of the book “Chip War: The Fight for the World’s Most Critical Technology,” told a U.S. Senate Foreign Relations subcommittee last week. It’s a lead that’s “fragile and much smaller” than its advantage in AI chips, he said.

Part of the difference comes down to sheer scale, especially as education levels rise in China. Its population is four times that of the U.S., and the same goes for its volume of science, technology, engineering and mathematics graduates. In 2020, China produced 3.57 million STEM graduates, the most of any country and far outpacing the 820,000 in the U.S.

— Evelyn Cheng


Technology

CEO of South Korean online retail giant Coupang resigns over data breach


Park Dae-jun, CEO of South Korean online retail giant Coupang, has resigned, three weeks after the company became aware of a massive data breach that affected nearly 34 million customers.


The CEO of South Korean online retail giant Coupang Corp. resigned Wednesday, three weeks after the company became aware of a massive data breach that affected nearly 34 million customers.

Coupang said CEO Park Dae-jun resigned due to the data breach incident — which was revealed on Nov. 18 — according to a Google translation of the statement in Korean.

“I am deeply sorry for disappointing the public with the recent personal information incident,” Park said, adding, “I feel a deep sense of responsibility for the outbreak and the subsequent recovery process, and I have decided to step down from all positions.”

Following his resignation, parent company Coupang Inc. appointed Harold Rogers, its chief administrative officer and general counsel, as interim CEO.

Coupang said that Rogers plans to “focus on alleviating customer anxiety caused by the personal information leak” and to stabilize the organization.

Park, who joined the company in 2012, became Coupang’s sole CEO in May, after the company transitioned away from a dual-CEO system.

According to Coupang, he was responsible for the company’s innovative new business and regional infrastructure development, and led projects to expand sales channels for small and medium enterprises, among other initiatives.

South Korean companies are known for being “very, very cost-efficient,” which may have led to neglecting areas like cybersecurity, Peter Kim, managing director at KB Securities, told CNBC’s “Squawk Box Asia” Wednesday.

“I think the core issue here is that we’ve had a number of other breaches, not just Coupang, but previously, telecom companies in Korea,” Kim added. “I understand some data companies consider Korea to be [the] top three or four most breached on a data, on an IT security basis in the world.”


South Korean companies have been hit by cybersecurity breaches before, including an April incident at mobile carrier SK Telecom that affected 23.24 million people. The country previously saw one of its largest cybersecurity incidents in 2011, when attackers stole over 35 million user details from internet platforms Nate and Cyworld.

Nate is one of the most popular search engines in South Korea, while Cyworld was one of the country’s largest social networking sites in the early 2000s.

Prime Minister Kim Min-seok reportedly said Wednesday that strict action would be taken against the company if violations of the law were found, according to South Korean media outlet Yonhap.

Police also raided the Coupang headquarters for a second day on Wednesday, continuing their investigation into the data breach.

Yonhap also reported, citing sources, that the police search warrant “specifies a Chinese national who formerly worked for Coupang as a suspect on charges of breaching the information and communications network and leaking confidential data.”

Last week, South Korean President Lee Jae Myung called for increased penalties on data breaches, saying that the Coupang data breach had served as a wake-up call.

— CNBC’s Chery Kang contributed to this report.


