The U.K. government on Wednesday published recommendations for the artificial intelligence industry, outlining an all-encompassing approach for regulating the technology at a time when it has reached frenzied levels of hype.
In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper.
It has tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”
“Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors,” the government said.
“When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.”
Maya Pindeus, CEO and co-founder of AI startup Humanising Autonomy, said the government’s move marked a “first step” toward regulating AI.
“There does need to be a bit of a stronger narrative,” she said. “I hope to see that. This is kind of planting the seeds for this.”
However, she added, “Regulating technology as technology is incredibly difficult. You want it to advance; you don’t want to hinder any advancements when it impacts us in certain ways.”
The arrival of the recommendations is timely. ChatGPT, the popular AI chatbot developed by the Microsoft-backed company OpenAI, has driven a wave of demand for the technology, and people are using the tool for everything from penning school essays to drafting legal opinions.
ChatGPT has already become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users as of February. But experts have raised concerns about the negative implications of the technology, including the potential for plagiarism and discrimination against women and ethnic minorities.
AI ethicists are worried about biases in the data used to train AI models. Algorithms have been shown to be skewed in favor of men, especially white men, putting women and minorities at a disadvantage.
Fears have also been raised about the possibility of jobs being lost to automation. On Tuesday, Goldman Sachs warned that as many as 300 million jobs could be at risk of being wiped out by generative AI products.
The government wants companies that incorporate AI into their businesses to ensure they provide an ample level of transparency about how their algorithms are developed and used. Organizations “should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI,” the DSIT said.
Companies should also offer users a way to contest decisions made by AI-based tools, the DSIT said. User-generated content platforms like Facebook, TikTok and YouTube often use automated systems to remove content flagged as violating their guidelines.
AI, which is believed to contribute £3.7 billion ($4.6 billion) to the U.K. economy each year, should also “be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes,” the DSIT added.
On Monday, Secretary of State Michelle Donelan visited the offices of AI startup DeepMind in London, a government spokesperson said.
“Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely,” Donelan said in a statement Wednesday.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
Lila Ibrahim, chief operating officer of DeepMind and a member of the U.K.’s AI Council, said AI is a “transformational technology,” but that it “can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.”
“The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks,” Ibrahim said.
Not everyone is convinced by the U.K. government’s approach to regulating AI. John Buyers, head of AI at the law firm Osborne Clarke, said the move to delegate responsibility for supervising the technology among regulators risks creating a “complicated regulatory patchwork full of holes.”
“The risk with the current approach is that a problematic AI system will need to present itself in the right format to trigger a regulator’s jurisdiction, and moreover the regulator in question will need to have the right enforcement powers in place to take decisive and effective action to remedy the harm caused and generate a sufficient deterrent effect to incentivise compliance in the industry,” Buyers told CNBC via email.
By contrast, the EU has proposed a “top down regulatory framework” when it comes to AI, he added.
As artificial intelligence becomes more democratized, it is important for emerging economies to build their own “sovereign AI,” panelists told CNBC’s East Tech West conference in Bangkok, Thailand, on Friday.
In general, sovereign AI refers to a nation’s ability to control its own AI technologies, data and related infrastructure, ensuring strategic autonomy while meeting its unique priorities and security needs.
However, this sovereignty has been lacking, according to panelist Kasima Tharnpipitchai, head of AI strategy at SCB 10X, the technology investment arm of Thailand-based SCBX Group. He noted that many of the world’s most prominent large language models, operated by companies such as Anthropic and OpenAI, are based on the English language.
“The way you think, the way you interact with the world, the way you are when you speak another language can be very different,” Tharnpipitchai said.
It is, therefore, important for countries to take ownership of their AI systems, developing technology for specific languages, cultures, and countries, rather than just translating over English-based models.
Panelists agreed that the digitally savvy ASEAN region, with a total population of nearly 700 million people, is particularly well positioned to build its sovereign AI. People under the age of 35 make up around 61% of the population, and about 125,000 new users gain access to the internet daily.
Given this context, Jeff Johnson, managing director of ASEAN at Amazon Web Services, said, “I think it’s really important, and we’re really focused on how we can really democratize access to cloud and AI.”
Open-source models
According to panelists, one key way that countries can build up their sovereign AI environments is through the use of open-source AI models.
“There is plenty of amazing talent here in Southeast Asia and in Thailand, especially. To have that captured in a way that isn’t publicly accessible or ecosystem developing would feel like a shame,” said SCB 10X’s Tharnpipitchai.
Going open-source is a way to create a “collective energy” that helps Thailand better compete in AI and advances sovereignty in a way that benefits the entire country, he added.
Open-source generally refers to software in which the source code is made freely available, allowing anyone to view, modify and redistribute it. LLM players, such as China’s DeepSeek and Meta’s Llama, advertise their models as open-source, albeit with some restrictions.
The emergence of more open-source models offers companies and governments more options compared to relying on a few closed models, according to Cecily Ng, vice president and general manager of ASEAN & Greater China at software vendor Databricks.
AI experts have previously told CNBC that open-source AI has helped China boost AI adoption, better develop its AI ecosystem and compete with the U.S.
Access to computing
Prem Pavan, vice president and general manager of Southeast Asia and Korea at Red Hat, said that the localization of AI had been focused on language until recently. Having sovereign access to AI models powered by local hardware and computing is more important today, he added.
Panelists said that for emerging countries like Thailand, AI localization can be offered by cloud computing companies with domestic operations. These include global hyperscalers such as AWS, Microsoft Azure and Tencent Cloud, and sovereign players like AIS Cloud and True IDC.
“We’re here in Thailand and across Southeast Asia to support all industries, all businesses of all shapes and sizes, from the smallest startup to the largest enterprise,” said AWS’s Johnson.
He added that the economic model of the company’s cloud services makes it easy to “pay for what you use,” thus lowering the barriers to entry and making it very easy to build models and applications.
In April, the U.N. Trade and Development Agency said in a report that AI was projected to reach $4.8 trillion in market value by 2033. However, it warned that the technology’s benefits remain highly concentrated, with nations at risk of lagging behind.
Among UNCTAD’s recommendations to the international community for driving inclusive growth was shared AI infrastructure, the use of open-source AI models and initiatives to share AI knowledge and resources.
Amazon CEO Andy Jassy said the rapid rollout of generative artificial intelligence means the company will one day require fewer employees to do some of the work that computers can handle.
“Like with every technical transformation, there will be fewer people doing some of the jobs that the technology actually starts to automate,” Jassy told CNBC’s Jim Cramer in an interview on Monday. “But there’s going to be other jobs.”
Even as AI eliminates the need for some roles, Amazon will continue to hire more employees in AI, robotics and elsewhere, Jassy said.
Earlier this month, Jassy admitted that he expects the company’s workforce to decline in the next few years as Amazon embraces generative AI and AI-powered software agents. He told staffers in a memo that it will be “hard to know exactly where this nets out over time” but that the corporate workforce will shrink as Amazon wrings more efficiencies out of the technology.
It’s a message that’s making its way across the tech sector. Salesforce CEO Marc Benioff last week claimed AI is doing 30% to 50% of the work at his software vendor. Other companies such as Shopify and Microsoft have urged employees to adopt the technology in their daily work. The CEO of Klarna said in May that the online lender has managed to shrink its headcount by about 40%, in part due to investments in AI and natural attrition in its workforce.
Jassy said on Monday that AI will free employees from “rote work” and “make all our jobs more interesting,” while enabling staffers to invent better services more quickly than before.
Amazon and other tech companies have also been shrinking their workforces through rolling layoffs over the past several years. Amazon has cut more than 27,000 jobs since the start of 2022, and it’s announced smaller, more targeted layoffs in its retail and devices units in recent months.
Amazon shares are flat so far this year, underperforming the Nasdaq, which has gained 5.5%. The stock is about 10% below its record reached in February, while fellow megacaps Meta, Microsoft and Nvidia are all trading at or very near record highs.
Stablecoin issuer Circle Internet Group has applied for a national trust bank charter, moving forward on its mission to bring stablecoins into the traditional financial world after the firm’s big market debut this month, CNBC confirmed.
Shares rose 1% after hours.
If the Office of the Comptroller of the Currency grants the charter, Circle will establish the First National Digital Currency Bank, N.A. Under the charter, Circle, which issues the USDC stablecoin, would also be able to offer custody services to institutional clients in the future for assets that could include representations of stocks and bonds on a blockchain network.
Reuters first reported on Circle’s bank charter application.
There are no plans to change the management of Circle’s USDC reserves, which are currently held with other major banks.
Circle’s move comes after a wildly successful IPO and debut trading month on the public markets. Shares of the company are up 484% in June. The company is also benefiting from a wave of optimism after the Senate’s passage of the GENIUS Act, which would give the U.S. a regulatory framework for stablecoins.
Having a federally regulated trust charter would also help Circle meet requirements under the GENIUS Act.
“Establishing a national digital currency trust bank of this kind marks a significant milestone in our goal to build an internet financial system that is transparent, efficient and accessible,” Circle CEO Jeremy Allaire said in a statement shared with CNBC. “By applying for a national trust charter, Circle is taking proactive steps to further strengthen our USDC infrastructure.”
“Further, we will align with emerging U.S. regulation for the issuance and operation of dollar-denominated payment stablecoins, which we believe can enhance the reach and resilience of the U.S. dollar, and support the development of crucial, market neutral infrastructure for the world’s leading institutions to build on,” he said.