British Prime Minister Rishi Sunak delivers a speech on artificial intelligence at the Royal Society, Carlton House Terrace, on Oct. 26, 2023, in London.
The U.K. is set to hold its landmark artificial intelligence summit this week, as political leaders and regulators grow increasingly concerned about the rapid advancement of the technology.
The two-day summit, which takes place on Nov. 1 and Nov. 2, will host government officials and companies from around the world, including the U.S. and China, two superpowers in the race to develop cutting-edge AI technologies.
It is Prime Minister Rishi Sunak’s chance to make a statement to the world on the U.K.’s role in the global conversation surrounding AI, and how the technology should be regulated.
Ever since the introduction of Microsoft-backed OpenAI’s ChatGPT, the push by global policymakers to regulate AI has intensified.
Of particular concern is the potential for the technology to replace — or undermine — human intelligence.
Where it’s being held
The AI summit will be held at Bletchley Park, the historic landmark around 55 miles north of London.
Bletchley Park was a codebreaking facility during World War II.
It’s the location where, in 1941, a group of codebreakers led by British scientist and mathematician Alan Turing cracked Nazi Germany’s notorious Enigma machine.
It’s also no secret that the U.K. is holding the summit at Bletchley Park because of the site’s historical significance — it sends a clear message that the U.K. wants to reinforce its position as a global leader in innovation.
What it seeks to address
The main objective of the U.K. AI summit is to establish some level of international coordination on principles for the ethical and responsible development of AI models.
The summit is squarely focused on so-called “frontier AI” models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere.
It will look to address two key categories of risk when it comes to AI: misuse and loss of control.
Misuse risks involve a bad actor being aided by new AI capabilities. For example, a cybercriminal could use AI to develop a new type of malware that cannot be detected by security researchers, or the technology could be used to help state actors develop dangerous bioweapons.
Loss of control risks refer to a situation in which the AI that humans create could be turned against them. This could “emerge from advanced systems that we would seek to be aligned with our values and intentions,” the government said.
Who’s going?
Major names in the technology and political world will be there.
U.S. Vice President Kamala Harris speaks during the conclusion of the Investing in America tour at Coppin State University in Baltimore, Maryland, on July 14, 2023.
A Chinese government delegation from the Ministry of Science and Technology
European Commission President Ursula von der Leyen
Who won’t be there?
Several leaders have opted not to attend the summit.
French President Emmanuel Macron.
They include:
U.S. President Joe Biden
Canadian Prime Minister Justin Trudeau
French President Emmanuel Macron
German Chancellor Olaf Scholz
When asked whether Sunak feels snubbed by his international counterparts, his spokesperson told reporters Monday, “No, not at all.”
“I think we remain confident that we have brought together the right group of world experts in the AI space, leading businesses and indeed world leaders and representatives who will be able to take on this vital issue,” the spokesperson said.
“This is the first AI safety summit of its kind and I think it is a significant achievement that for the first time people from across the world and indeed from across a range of world leaders and indeed AI experts are coming together to look at these frontier risks.”
Will it succeed?
The British government wants the AI summit to serve as a platform to shape the technology’s future. It will emphasize safety, ethics, and responsible development of AI, while also calling for collaboration at a global level.
Sunak is hoping that the summit will provide a chance for Britain and its global counterparts to find some agreement on how best to develop AI safely and responsibly, and apply safeguards to the technology.
In a speech last week, the prime minister warned that AI “will bring a transformation as far reaching as the industrial revolution, the coming of electricity, or the birth of the internet” — while adding there are risks attached.
“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said.
Sunak announced the U.K. will set up the world’s first AI safety institute to evaluate and test new types of AI in order to understand the risks.
He also said he would seek to set up a global expert panel nominated by countries and organizations attending the AI summit this week, which would publish a state of AI science report.
A particular point of contention surrounding the summit is Sunak’s decision to invite China — which has been at the center of a geopolitical tussle over technology with the U.S. — to the summit. Sunak’s spokesperson has said it is important to invite China, as the country is a world leader in AI.
International coordination on a technology as complex and multifaceted as AI may prove difficult — and it is made all the more so when two of the big attendees, the U.S. and China, are engaged in a tense clash over technology and trade.
China’s President Xi Jinping and U.S. President Joe Biden at the G20 Summit in Nusa Dua on the Indonesian island of Bali on Nov. 14, 2022.
Washington recently curbed sales of Nvidia’s advanced A800 and H800 artificial intelligence chips to China.
Different governments have come up with their own respective proposals for regulating the technology to combat the risks it poses in terms of misinformation, privacy and bias.
The EU is hoping to finalize its AI Act, which is set to be one of the world’s first pieces of legislation targeted specifically at AI, by the end of the year, and adopt the regulation by early 2024 before the June European Parliament elections.
Some tech industry officials think the summit’s focus is too narrow. They say that restricting the summit to frontier AI models misses an opportunity to encourage contributions from members of the tech community beyond frontier AI.
“I do think that by focusing just on frontier models, we’re basically missing a large piece of the jigsaw,” Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC in an interview last week.
“By focusing only on companies that are currently building frontier models and are leading that development right now, we’re also saying no one else can come and build the next generation of frontier models.”
Some are frustrated by the summit’s focus on “existential threats” surrounding artificial intelligence and think the government should address more pressing, near-term risks, such as the potential for deepfakes to manipulate the 2024 elections.
“It’s like the fire brigade conference where they talk about dealing with a meteor strike that obliterates the country,” Stefan van Grieken, CEO of generative AI firm Cradle, told CNBC.
“We should be concentrating on the real fires that are literally present threats.”
However, Marc Warner, CEO of British AI startup Faculty.ai, said he believes that focusing on the long-term, potentially devastating risks of achieving artificial general intelligence is “very reasonable.”
“I think that building artificial general intelligence will be possible, and I think if it is possible, there is no scientific reason that we know of right now to say that it’s guaranteed safe,” Warner told CNBC.
“In some ways, it’s sort of the dream scenario that governments tackle something before it’s a problem rather than waiting until stuff gets really bad.”
Google CEO Sundar Pichai testifies before the House Judiciary Committee at the Rayburn House Office Building on December 11, 2018 in Washington, DC.
Google’s antitrust woes are continuing to mount, just as the company tries to brace for a future dominated by artificial intelligence.
On Thursday, a federal judge ruled that Google held illegal monopolies in online advertising markets due to its position between ad buyers and sellers.
The ruling, which followed a September trial in Alexandria, Virginia, represents a second major antitrust blow for Google in under a year. In August, a judge determined the company has held a monopoly in its core market of internet search, the most significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago.
Google is in a particularly precarious spot as it tries to simultaneously defend its primary business in court while fending off an onslaught of new competition due to the emergence of generative AI, most notably OpenAI’s ChatGPT, which offers users alternative ways to search for information. Revenue growth has cooled in recent years, and Google also now faces the added potential of a slowdown in ad spending due to economic concerns from President Donald Trump’s sweeping new tariffs.
Parent company Alphabet reports first-quarter results next week. Alphabet’s stock price dipped more than 1% on Thursday and is now down 20% this year.
In Thursday’s ruling, U.S. District Judge Leonie Brinkema said Google’s anticompetitive practices “substantially harmed” publishers and users on the web. The trial featured 39 live witnesses, depositions from an additional 20 witnesses and hundreds of exhibits.
Judge Brinkema ruled that Google unlawfully controls two of the three parts of the advertising technology market: the publisher ad server market and ad exchange market. Brinkema dismissed the third part of the case, determining that tools used for general display advertising can’t clearly be defined as Google’s own market. In particular, the judge cited the purchases of DoubleClick and Admeld and said the government failed to show those “acquisitions were anticompetitive.”
“We won half of this case and we will appeal the other half,” Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said in an emailed statement. “We disagree with the Court’s decision regarding our publisher tools. Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective.”
Attorney General Pam Bondi said in a press release from the DOJ that the ruling represents a “landmark victory in the ongoing fight to stop Google from monopolizing the digital public square.”
Potential ad disruption
If regulators force the company to divest parts of the ad-tech business, as the Justice Department has requested, it could open up opportunities for smaller players and other competitors to fill the void and snap up valuable market share. Amazon has been growing its ad business in recent years.
Meanwhile, Google is still defending itself against claims that its search business has acted as a monopoly by creating strong barriers to entry and a feedback loop that sustained its dominance. Google said in August, immediately after the search case ruling, that it would appeal, meaning the matter can play out in court for years even after the remedies are determined.
The remedies trial in the search case, which will lay out the consequences, begins next week. The Justice Department is seeking a divestiture of Google’s Chrome browser and an end to exclusive agreements, like its deal with Apple for search on iPhones. The judge is expected to rule by August.
Google CEO Sundar Pichai (L) and Apple CEO Tim Cook (R) listen as U.S. President Joe Biden speaks during a roundtable with American and Indian business leaders in the East Room of the White House on June 23, 2023 in Washington, DC.
After the ad market ruling on Thursday, Gartner’s Andrew Frank said Google’s “conflicts of interest” are apparent in how the market runs.
“The structure has been decades in the making,” Frank said, adding that “untangling that would be a significant challenge, particularly since lawyers don’t tend to be system architects.”
However, the uncertainty that comes with a potentially years-long appeals process means many publishers and advertisers will be waiting to see how things shake out before making any big decisions, given how much they rely on Google’s technology.
“Google will have incentives to encourage more competition possibly by loosening certain restrictions on certain media it controls, YouTube being one of them,” Frank said. “Those kind of incentives may create opportunities for other publishers or ad tech players.”
A date for the remedies trial in the ad tech case hasn’t been set.
Damian Rollison, senior director of market insights for marketing platform Soci, said the revenue hit from the ad market case could be more dramatic than the impact from the search case.
“The company stands to lose a lot more in material terms if its ad business, long its main source of revenue, is broken up,” Rollison said in an email. “Whereas divisions like Chrome are more strategically important.”
Jason Citron, CEO of Discord in Washington, DC, on January 31, 2024.
The New Jersey attorney general sued Discord on Thursday, alleging that the company misled consumers about child safety features on the gaming-centric social messaging app.
The lawsuit, filed in the New Jersey Superior Court by Attorney General Matthew Platkin and the state’s division of consumer affairs, alleges that Discord violated the state’s consumer fraud laws.
Discord did so, the complaint said, by allegedly “misleading children and parents from New Jersey” about safety features, “obscuring” the risks children face on the platform and failing to enforce its minimum age requirement.
“Discord’s strategy of employing difficult to navigate and ambiguous safety settings to lull parents and children into a false sense of safety, when Discord knew well that children on the Application were being targeted and exploited, are unconscionable and/or abusive commercial acts or practices,” lawyers wrote in the legal filing.
They alleged that Discord’s acts and practices were “offensive to public policy.”
A Discord spokesperson said in a statement that the company disputes the allegations and that it is “proud of our continuous efforts and investments in features and tools that help make Discord safer.”
“Given our engagement with the Attorney General’s office, we are surprised by the announcement that New Jersey has filed an action against Discord today,” the spokesperson said.
One of the lawsuit’s allegations centers on Discord’s age-verification process, which the plaintiffs say is flawed, writing that children under 13 can easily lie about their age to bypass the app’s minimum age requirement.
The lawsuit also alleges that Discord misled parents to believe that its so-called Safe Direct Messaging feature “was designed to automatically scan and delete all private messages containing explicit media content.” The lawyers claim that Discord misrepresented the efficacy of that safety tool.
“By default, direct messages between ‘friends’ were not scanned at all,” the complaint stated. “But even when Safe Direct Messaging filters were enabled, children were still exposed to child sexual abuse material, videos depicting violence or terror, and other harmful content.”
The New Jersey attorney general is seeking unspecified civil penalties against Discord, according to the complaint.
The filing marks the latest lawsuit brought by various state attorneys general around the country against social media companies.
In 2023, a bipartisan coalition of over 40 state attorneys general sued Meta over allegations that the company knowingly implemented addictive features across apps like Facebook and Instagram that harm the mental well-being of children and young adults.
The New Mexico attorney general sued Snap in September 2024 over allegations that Snapchat’s design features have made it easy for predators to target children through sextortion schemes.
The following month, a bipartisan group of over a dozen state attorneys general filed lawsuits against TikTok over allegations that the app misleads consumers into believing it is safe for children. In one lawsuit filed by the District of Columbia’s attorney general, lawyers allege that the ByteDance-owned app maintains a virtual currency that “substantially harms children” and a livestreaming feature that “exploits them financially.”
In January 2024, executives from Meta, TikTok, Snap, Discord and X were grilled by lawmakers during a Senate hearing over allegations that the companies failed to protect children on their respective social media platforms.
Signage at 23andMe headquarters in Sunnyvale, California, U.S., on Wednesday, Jan. 27, 2021.
The House Committee on Energy and Commerce is investigating 23andMe’s decision to file for Chapter 11 bankruptcy protection and has expressed concern that the sensitive genetic data the company holds is “at risk of being compromised,” CNBC has learned.
Rep. Brett Guthrie, R-Ky., Rep. Gus Bilirakis, R-Fla., and Rep. Gary Palmer, R-Ala., sent a letter to 23andMe’s interim CEO Joe Selsavage on Thursday requesting answers to a series of questions about its data and privacy practices by May 1.
The congressmen are the latest government officials to raise concerns about 23andMe’s commitment to data security, as the House Committee on Oversight and Government Reform and the Federal Trade Commission have sent the company similar letters in recent weeks.
23andMe exploded into the mainstream with its at-home DNA testing kits that gave customers insight into their family histories and genetic profiles. The company was once valued at a peak of $6 billion, but has since struggled to generate recurring revenue and establish lucrative research and therapeutics businesses.
After filing for bankruptcy in Missouri federal court in March, 23andMe’s assets, including its vast genetic database, are up for sale.
“With the lack of a federal comprehensive data privacy and security law, we write to express our great concern about the safety of Americans’ most sensitive personal information,” Guthrie, Bilirakis and Palmer wrote in the letter.
23andMe did not immediately respond to CNBC’s request for comment.
23andMe has been dogged by privacy concerns in recent years, after hackers accessed the information of nearly 7 million customers in 2023.
DNA data is particularly sensitive because each person’s sequence is unique, meaning it can never be fully anonymized, according to the National Human Genome Research Institute. If genetic data falls into the hands of bad actors, it could be used to facilitate identity theft, insurance fraud and other crimes.
The House Committee on Energy and Commerce has jurisdiction over issues involving data privacy. Guthrie serves as the chairman of the committee, Palmer serves as the chairman of the Subcommittee on Oversight and Investigations and Bilirakis serves as the chairman of the Subcommittee on Commerce, Manufacturing and Trade.
The congressmen said that while Americans’ health information is protected under legislation like the Health Insurance Portability and Accountability Act, or HIPAA, direct-to-consumer companies like 23andMe are typically not covered under that law. They said they feel “great concern” about the safety of the company’s customer data, especially given the uncertainty around the sale process.
23andMe has repeatedly said it will not change how it manages or protects consumer data throughout the transaction. Similarly, in a March release, the company said all potential buyers must agree to comply with its privacy policy and applicable law.
“To constitute a qualified bid, potential buyers must, among other requirements, agree to comply with 23andMe’s consumer privacy policy and all applicable laws with respect to the treatment of customer data,” 23andMe said in the release.
23andMe customers can still delete their account and accompanying data through the company’s website. But Guthrie, Bilirakis and Palmer said there are reports that some users have had trouble doing so.
“Regardless of whether the company changes ownership, we want to ensure that customer access and deletion requests are being honored by 23andMe,” the congressmen wrote.