U.S. President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas, on Nov. 19, 2024.
Brandon Bell | Via Reuters
The U.S. political landscape is set to undergo some shifts in 2025 — and those changes will have some major implications for the regulation of artificial intelligence.
President-elect Donald Trump will be inaugurated on Jan. 20. Joining him in the White House will be a raft of top advisors from the world of business — including Elon Musk and Vivek Ramaswamy — who are expected to influence policy thinking around nascent technologies such as AI and cryptocurrencies.
Across the Atlantic, a tale of two jurisdictions has emerged, with the U.K. and European Union diverging in regulatory thinking. While the EU has taken more of a heavy hand with the Silicon Valley giants behind the most powerful AI systems, Britain has adopted a more light-touch approach.
In 2025, the state of AI regulation globally could be in for a major overhaul. CNBC takes a look at some of the key developments to watch — from the evolution of the EU’s landmark AI Act to what a Trump administration could do for the U.S.
Musk’s U.S. policy influence
Elon Musk walks on Capitol Hill on the day of a meeting with Senate Republican Leader-elect John Thune (R-SD), in Washington, U.S. December 5, 2024.
Benoit Tessier | Reuters
Although it wasn’t an issue that featured heavily during Trump’s election campaign, artificial intelligence is expected to be one of the key sectors to benefit under the next U.S. administration.
For one, Trump appointed Musk, CEO of electric car manufacturer Tesla, to co-lead his “Department of Government Efficiency” alongside Ramaswamy, an American biotech entrepreneur who dropped out of the 2024 presidential election race to back Trump.
Matt Calkins, CEO of Appian, told CNBC Trump’s close relationship with Musk could put the U.S. in a good position when it comes to AI, citing the billionaire’s experience as a co-founder of OpenAI and CEO of xAI, his own AI lab, as positive indicators.
“We’ve finally got one person in the U.S. administration who truly knows about AI and has an opinion about it,” Calkins said in an interview last month. Musk was one of Trump’s most prominent endorsers in the business community, even appearing at some of his campaign rallies.
There is currently no confirmation on what Trump has planned in terms of possible presidential directives or executive orders. But Calkins thinks it’s likely Musk will look to suggest guardrails to ensure AI development doesn’t endanger civilization — a risk he’s warned about multiple times in the past.
“He has an unquestioned reluctance to allow AI to cause catastrophic human outcomes – he’s definitely worried about that, he was talking about it long before he had a policy position,” Calkins told CNBC.
Currently, there is no comprehensive federal AI legislation in the U.S. Rather, there’s been a patchwork of regulatory frameworks at the state and local level, with numerous AI bills introduced across 45 states plus Washington D.C., Puerto Rico and the U.S. Virgin Islands.
The EU AI Act
The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.
Jaque Silva | Nurphoto | Getty Images
The European Union has so far been the only jurisdiction globally to push forward with comprehensive statutory rules for the AI industry. Earlier this year, the bloc’s AI Act — a first-of-its-kind AI regulatory framework — officially entered into force.
The law isn’t fully in force yet, but it’s already causing tension among large U.S. tech companies, which are concerned that some aspects of the regulation are too strict and may quash innovation.
In December, the EU AI Office, a newly created body overseeing models under the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.
The second draft included exemptions for providers of certain open-source AI models. Such models are typically available to the public to allow developers to build their own custom versions. It also included a requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.
The Computer & Communications Industry Association — whose members include Amazon, Google and Meta — warned it “contains measures going far beyond the Act’s agreed scope, such as far-reaching copyright measures.”
The AI Office wasn’t immediately available for comment when contacted by CNBC.
It’s worth noting the EU AI Act is far from reaching full implementation.
As Shelley McKinley, chief legal officer of popular code repository platform GitHub, told CNBC in November, “the next phase of the work has started, which may mean there’s more ahead of us than there is behind us at this point.”
For example, in February, the first provisions of the Act will become enforceable, banning “unacceptable-risk” AI practices such as certain uses of real-time remote biometric identification in public spaces. Obligations for “high-risk” applications such as loan decisioning and educational scoring come into force later in the rollout. A third draft of the code on GPAI models is slated for publication that same month.
European tech leaders are concerned about the risk that punitive EU measures on U.S. tech firms could provoke a reaction from Trump, which might in turn cause the bloc to soften its approach.
Take antitrust regulation, for example. The EU has been active in curbing the dominance of U.S. tech giants — but that’s something that could draw a negative response from Trump, according to Andy Yen, CEO of Swiss VPN firm Proton.
“[Trump’s] view is he probably wants to regulate his tech companies himself,” Yen told CNBC in a November interview at the Web Summit tech conference in Lisbon, Portugal. “He doesn’t want Europe to get involved.”
UK copyright review
Britain’s Prime Minister Keir Starmer gives a media interview while attending the 79th United Nations General Assembly at the United Nations Headquarters in New York, U.S. September 25, 2024.
The U.K. has so far refrained from introducing statutory rules for AI. However, Keir Starmer’s government has said it plans to draw up legislation for the technology, although details remain thin for now. The general expectation is that the U.K. will take a more principles-based approach to AI regulation, as opposed to the EU’s risk-based framework.
Most LLMs are trained on public data from the open web, which often includes examples of artwork and other copyrighted material. Artists and publishers like the New York Times allege that these systems are unfairly scraping their valuable content without consent to generate output.
To address this issue, the U.K. government is considering making an exception to copyright law for AI model training, while still allowing rights holders to opt out of having their works used for training purposes.
Appian’s Calkins said that the U.K. could end up being a “global leader” on the issue of copyright infringement by AI models, adding that the country isn’t “subject to the same overwhelming lobbying blitz from domestic AI leaders that the U.S. is.”
U.S.-China relations a possible point of tension
U.S. President Donald Trump, right, and Xi Jinping, China’s president, walk past members of the People’s Liberation Army (PLA) during a welcome ceremony outside the Great Hall of the People in Beijing, China, on Thursday, Nov. 9, 2017.
Qilai Shen | Bloomberg | Getty Images
Lastly, as world governments seek to regulate fast-growing AI systems, there’s a risk geopolitical tensions between the U.S. and China may escalate under Trump.
In his first term as president, Trump enforced a number of hawkish policy measures on China, including a decision to add Huawei to a trade blacklist restricting it from doing business with American tech suppliers. He also launched a bid to ban TikTok, which is owned by Chinese firm ByteDance, in the U.S. — although he’s since softened his position on the app.
China is racing to beat the U.S. for dominance in AI. At the same time, the U.S. has taken measures to restrict China’s access to key technologies, mainly chips like those designed by Nvidia, which are required to train more advanced AI models. China has responded by attempting to build its own homegrown chip industry.
Technologists worry that a geopolitical fracturing between the U.S. and China on artificial intelligence could result in other risks, such as the potential for one of the two to develop a form of AI smarter than humans.
Max Tegmark, founder of the nonprofit Future of Life Institute, believes the U.S. and China could in future create a form of AI that can improve itself and design new systems without human supervision, potentially forcing both countries’ governments to individually come up with rules around AI safety.
“My optimistic path forward is the U.S. and China unilaterally impose national safety standards to prevent their own companies from doing harm and building uncontrollable AGI, not to appease the rival superpowers, but just to protect themselves,” Tegmark told CNBC in a November interview.
Governments are already trying to work together to figure out how to create regulations and frameworks around AI. In 2023, the U.K. hosted a global AI safety summit, which the U.S. and China administrations both attended, to discuss potential guardrails around the technology.
An artificial intelligence feature on iPhones is generating fake news alerts, stoking concerns about the technology’s ability to spread misinformation.
Last week, a recently launched Apple feature that uses AI to summarize users’ notifications pushed out an inaccurately summarized BBC News app alert about the broadcaster’s story on the PDC World Darts Championship semi-final, falsely claiming British darts player Luke Littler had won the championship.
The incident happened a day before the actual tournament’s final, which Littler did go on to win.
Then, just hours after that incident occurred, a separate notification generated by Apple Intelligence, the tech giant’s AI system, falsely claimed that tennis legend Rafael Nadal had come out as gay.
The BBC has been trying for about a month to get Apple to fix the problem. The British public broadcaster complained to Apple in December after its AI feature generated a false headline suggesting that Luigi Mangione, the man arrested following the murder of health insurance firm UnitedHealthcare CEO Brian Thompson in New York, had shot himself — which never happened.
Apple was not immediately available for comment when contacted by CNBC. On Monday, Apple told the BBC that it’s working on an update to resolve the problem by adding a clarification that shows when Apple Intelligence is responsible for the text displayed in the notifications. Currently, generated news notifications show up as coming directly from the source.
“Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback,” the company said in a statement shared with the BBC. Apple added that it’s encouraging users to report a concern if they view an “unexpected notification summary.”
The false Mangione headline was flagged on the social media app Bluesky by Ken Schwencke, a senior editor at investigative journalism site ProPublica.
CNBC has reached out to the BBC and New York Times for comment on Apple’s proposed solution to its AI feature’s misinformation issue.
AI’s misinformation problem
Apple touts its AI-generated notification summaries as an effective way to group and rewrite previews of news app notifications into a single alert on a user’s lock screen.
It’s a feature Apple says is designed to help users scan their notifications for key details and cut down on the overwhelming barrage of updates many smartphone users are familiar with.
However, this has resulted in what AI experts refer to as “hallucinations” — responses generated by AI that contain false or misleading information.
“I suspect that Apple will not be alone in having challenges with AI-generated content. We’ve already seen numerous examples of AI services confidently telling mistruths, so-called ‘hallucinations’,” Ben Wood, chief analyst at tech-focused market research firm CCS Insights, told CNBC.
In Apple’s case, because the AI tries to consolidate notifications and condense them into a basic summary of information, it has mashed words together in a way that inaccurately characterizes the events — while confidently presenting them as facts.
“Apple had the added complexity of trying to compress content into very short summaries, which ended up delivering erroneous messages,” Wood added. “Apple will undoubtedly seek to address this as soon as possible, and I’m sure rivals will be watching closely to see how it responds.”
Generative AI works by trying to figure out the best possible answer to a question or prompt entered by a user, relying on the vast quantities of data its underlying large language models were trained on.
Sometimes the AI might not know the answer. But because it’s been programmed to always present a response to user prompts, this can result in cases where the AI effectively lies.
It’s not clear exactly when Apple’s fix for the bug in its notification summarization feature will arrive. The iPhone maker said to expect one in “the coming weeks.”
OpenAI CEO Sam Altman visits “Making Money With Charles Payne” at Fox Business Network Studios in New York on Dec. 4, 2024.
Mike Coppola | Getty Images
OpenAI CEO Sam Altman’s sister, Ann Altman, filed a lawsuit on Monday, alleging that her brother sexually abused her regularly between the years of 1997 and 2006.
The lawsuit, which was filed in U.S. District Court in the Eastern District of Missouri, alleges that the abuse took place at the family’s home in Clayton, Missouri, and began when Ann, who goes by Annie, was three and Sam was 12. The filing claims that the abusive activities took place “several times per week,” beginning with oral sex and later involving penetration.
The lawsuit claims that “as a direct and proximate result of the foregoing acts of sexual assault,” the plaintiff has experienced “severe emotional distress, mental anguish, and depression, which is expected to continue into the future.”
The younger Altman has publicly made similar sexual assault allegations against her brother in the past on platforms like X, but this is the first time she’s taken him to court. She’s being represented by Ryan Mahoney, whose Illinois-based firm specializes in matters including sexual assault and harassment.
The lawsuit requests a jury trial and damages in excess of $75,000.
In a joint statement on X with his mother, Connie, and his brothers Jack and Max, Sam Altman denied the allegations.
“Annie has made deeply hurtful and entirely untrue claims about our family, and especially Sam,” the statement said. “We’ve chosen not to respond publicly, out of respect for her privacy and our own. However, she has now taken legal action against Sam, and we feel we have no choice but to address this.”
Their response says “all of these claims are utterly untrue,” adding that “this situation causes immense pain to our entire family.” They said that Ann Altman faces “mental health challenges” and “refuses conventional treatment and lashes out at family members who are genuinely trying to help.”
Sam Altman has gained international prominence since OpenAI’s debut of the artificial intelligence chatbot ChatGPT in November 2022. Backed by Microsoft, the company was most recently valued at $157 billion, with funding coming from Thrive Capital, chipmaker Nvidia, SoftBank and others.
Altman was briefly ousted from the CEO role by OpenAI’s board in November 2023, but was quickly reinstated due to pressure from investors and employees.
This isn’t the only lawsuit the tech exec faces.
In March, Tesla and SpaceX CEO Elon Musk sued OpenAI and co-founders Altman and Greg Brockman, alleging breach of contract and fiduciary duty. Musk, who now runs a competing AI startup, xAI, was a co-founder of OpenAI when it began as a nonprofit in 2015. Musk left the board in 2018 and has publicly criticized OpenAI for allegedly abandoning its original mission.
Musk is suing to keep OpenAI from turning into a for-profit company. In June, Musk withdrew the original complaint filed in a San Francisco state court and later refiled in federal court.
Last month, OpenAI clapped back against Musk, claiming in a blog post that in 2017 Musk “not only wanted, but actually created, a for-profit” to serve as the company’s proposed new structure.
This photo illustration created on January 7, 2025, in Washington, DC, shows an image of Mark Zuckerberg, CEO of Meta, and an image of the Meta logo.
Drew Angerer | AFP | Getty Images
Meta employees took to their internal forum on Tuesday, criticizing the company’s decision to end third-party fact-checking on its services two weeks before President-elect Donald Trump’s inauguration.
Company employees voiced their concern after Joel Kaplan, Meta’s new chief global affairs officer and a former White House deputy chief of staff under President George W. Bush, announced the content policy changes on Workplace, the in-house communications tool.
“We’re optimistic that these changes help us return to that fundamental commitment to free expression,” Kaplan wrote in the post, which was reviewed by CNBC.
The content policy announcement follows a string of decisions that appear targeted to appease the incoming administration. On Monday, Meta added new members to its board, including UFC CEO Dana White, a longtime friend of Trump, and the company confirmed last month that it was contributing $1 million to Trump’s inauguration.
Among the latest changes, Kaplan announced that Meta will scrap its fact-checking program and shift to a user-generated system like X’s Community Notes. Kaplan, who took over his new role last week, also said that Meta will lift restrictions on certain topics and focus its enforcement on illegal and high-severity violations while giving users “a more personalized approach to political content.”
One worker wrote they were “extremely concerned” about the decision, saying it appears Meta is “sending a bigger, stronger message to people that facts no longer matter, and conflating that with a victory for free speech.”
Another employee commented that “simply absolving ourselves from the duty to at least try to create a safe and respective platform is a really sad direction to take.” Other comments expressed concern about the impact the policy change could have on the discourse around topics like immigration and gender identity, which, according to one employee, could result in an “influx of racist and transphobic content.”
A separate employee said they were scared that “we’re entering into really dangerous territory by paving the way for the further spread of misinformation.”
The changes weren’t universally criticized, as some Meta workers applauded the decision to end third-party fact-checking. One wrote that X’s Community Notes feature has “proven to be a much better representation of the ground truth.”
Another employee commented that the company should “provide an accounting of the worst outcomes of the early years” that necessitated the creation of a third-party fact-checking program, and whether the new policies would prevent the same type of fallout from happening again.
As part of the company’s massive layoffs in 2023, Meta also scrapped an internal fact-checking project, CNBC reported. That project would have let third-party fact checkers like the Associated Press and Reuters, in addition to credible experts, comment on flagged articles in order to verify the content.
Although Meta announced the end of its fact-checking program on Tuesday, the company had already been pulling it back. In September, a spokesperson for the AP told CNBC that the news agency’s “fact-checking agreement with Meta ended back in January” 2024.
Dana White, CEO of the Ultimate Fighting Championship, gestures as he speaks during a rally for Republican presidential nominee and former U.S. President Donald Trump at Madison Square Garden in New York, U.S., Oct. 27, 2024.
Andrew Kelly | Reuters
After the announcement of White’s addition to the board on Monday, employees also posted criticism, questions and jokes on Workplace, according to posts reviewed by CNBC.
White, who has led UFC since 2001, became embroiled in controversy in 2023 after a video published by TMZ showed him slapping his wife at a New Year’s Eve party in Mexico. White issued a public apology, and his wife, Anne White, issued a statement to TMZ, calling it an isolated incident.
Commenters on Workplace joked about whether performance reviews would now involve mixed martial arts-style fights.
In addition to White, John Elkann, the CEO of Italian auto holding company Exor, was named to Meta’s board.
Some employees asked what value auto and entertainment executives could bring to Meta, and whether White’s addition reflects the company’s values. One post suggested the new board appointments would help with political alliances that could be valuable but could also change the company culture in unintended or unwanted ways.
Comments in Workplace alluding to White’s personal history were flagged and removed from the discussion, according to posts from the internal app read by CNBC.
An employee who said he was with Meta’s Internal Community Relations team posted a reminder to Workplace about the company’s “community engagement expectations” policy, or CEE, for using the platform.
“Multiple comments have been flagged by the community for review,” the employee posted. “It’s important that we maintain a respectful work environment where people can do their best work.”
The internal community relations team member added that “insulting, criticizing, or antagonizing our colleagues or Board members is not aligned with the CEE.”
Several workers responded to that note saying that even respectful posts, if critical, had been removed, amounting to a corporate form of censorship.
One worker said that because critical comments were being removed, they wanted to voice support for “women and all voices.”
Meta declined to comment.
— CNBC’s Salvador Rodriguez contributed to this report.