
U.S. President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas, on Nov. 19, 2024.

Brandon Bell | Via Reuters

The U.S. political landscape is set to undergo some shifts in 2025 — and those changes will have some major implications for the regulation of artificial intelligence.

President-elect Donald Trump will be inaugurated on Jan. 20. Joining him in the White House will be a raft of top advisors from the world of business — including Elon Musk and Vivek Ramaswamy — who are expected to influence policy thinking around nascent technologies such as AI and cryptocurrencies.

Across the Atlantic, a tale of two jurisdictions has emerged, with the U.K. and European Union diverging in regulatory thinking. While the EU has taken more of a heavy hand with the Silicon Valley giants behind the most powerful AI systems, Britain has adopted a more light-touch approach.

In 2025, the state of AI regulation globally could be in for a major overhaul. CNBC takes a look at some of the key developments to watch — from the evolution of the EU’s landmark AI Act to what a Trump administration could do for the U.S.

Musk’s U.S. policy influence

Elon Musk walks on Capitol Hill on the day of a meeting with Senate Republican Leader-elect John Thune (R-SD), in Washington, U.S. December 5, 2024. 

Benoit Tessier | Reuters

Although it’s not an issue that featured very heavily during Trump’s election campaign, artificial intelligence is expected to be one of the key sectors set to benefit from the next U.S. administration.

For one, Trump appointed Musk, CEO of electric car manufacturer Tesla, to co-lead his “Department of Government Efficiency” alongside Ramaswamy, an American biotech entrepreneur who dropped out of the 2024 presidential election race to back Trump.

Matt Calkins, CEO of Appian, told CNBC Trump’s close relationship with Musk could put the U.S. in a good position when it comes to AI, citing the billionaire’s experience as a co-founder of OpenAI and CEO of xAI, his own AI lab, as positive indicators.

“We’ve finally got one person in the U.S. administration who truly knows about AI and has an opinion about it,” Calkins said in an interview last month. Musk was one of Trump’s most prominent endorsers in the business community, even appearing at some of his campaign rallies.

Trump has not yet confirmed any plans for presidential directives or executive orders on AI. But Calkins thinks it’s likely Musk will look to suggest guardrails to ensure AI development doesn’t endanger civilization — a risk he’s warned about multiple times in the past.

“He has an unquestioned reluctance to allow AI to cause catastrophic human outcomes – he’s definitely worried about that, he was talking about it long before he had a policy position,” Calkins told CNBC.

Currently, there is no comprehensive federal AI legislation in the U.S. Rather, there’s been a patchwork of regulatory frameworks at the state and local level, with numerous AI bills introduced across 45 states plus Washington D.C., Puerto Rico and the U.S. Virgin Islands.

The EU AI Act

The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.

Jaque Silva | Nurphoto | Getty Images

The European Union has so far been the only jurisdiction globally to push forward with comprehensive statutory rules for the AI industry. Earlier this year, the bloc’s AI Act — a first-of-its-kind AI regulatory framework — officially entered into force.

The law isn’t fully in force yet, but it’s already causing tension among large U.S. tech companies, which are concerned that some aspects of the regulation are too strict and may quash innovation.

In December, the EU AI Office, a newly created body overseeing models under the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.

The second draft included exemptions for providers of certain open-source AI models. Such models are typically made available to the public so that developers can build their own custom versions. It also included a requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.

The Computer & Communications Industry Association — whose members include Amazon, Google and Meta — warned it “contains measures going far beyond the Act’s agreed scope, such as far-reaching copyright measures.”

The AI Office wasn’t immediately available for comment when contacted by CNBC.

It’s worth noting the EU AI Act is far from reaching full implementation.

As Shelley McKinley, chief legal officer of popular code repository platform GitHub, told CNBC in November, “the next phase of the work has started, which may mean there’s more ahead of us than there is behind us at this point.”

For example, in February, the first provisions of the Act will become enforceable. These provisions cover “high-risk” AI applications such as remote biometric identification, loan decisioning and educational scoring. A third draft of the code on GPAI models is slated for publication that same month.

European tech leaders worry that punitive EU measures against U.S. tech firms could provoke a backlash from Trump, which might in turn push the bloc to soften its approach.

Take antitrust regulation, for example. The EU’s been an active player taking action to curb U.S. tech giants’ dominance — but that’s something that could result in a negative response from Trump, according to Swiss VPN firm Proton’s CEO Andy Yen.

“[Trump’s] view is he probably wants to regulate his tech companies himself,” Yen told CNBC in a November interview at the Web Summit tech conference in Lisbon, Portugal. “He doesn’t want Europe to get involved.”

UK copyright review

Britain’s Prime Minister Keir Starmer gives a media interview while attending the 79th United Nations General Assembly at the United Nations Headquarters in New York, U.S. September 25, 2024.

Leon Neal | Via Reuters

One country to watch is the U.K. Britain has previously shied away from introducing statutory obligations for AI model makers, fearing that new legislation could prove too restrictive.

However, Keir Starmer’s government has said it plans to draw up legislation for AI, although details remain thin for now. The general expectation is that the U.K. will take a more principles-based approach to AI regulation, as opposed to the EU’s risk-based framework.

Last month, the government dropped its first major indicator for where regulation is moving, announcing a consultation on measures to regulate the use of copyrighted content to train AI models. Copyright is a big issue for generative AI and LLMs, in particular.

Most LLMs use public data from the open web to train their AI models. But that often includes examples of artwork and other copyrighted material. Artists and publishers like the New York Times allege that these systems are unfairly scraping their valuable content without consent to generate original output.

To address this issue, the U.K. government is considering making an exception to copyright law for AI model training, while still allowing rights holders to opt out of having their works used for training purposes.

Appian’s Calkins said that the U.K. could end up being a “global leader” on the issue of copyright infringement by AI models, adding that the country isn’t “subject to the same overwhelming lobbying blitz from domestic AI leaders that the U.S. is.”

U.S.-China relations a possible point of tension

U.S. President Donald Trump, right, and Xi Jinping, China’s president, walk past members of the People’s Liberation Army (PLA) during a welcome ceremony outside the Great Hall of the People in Beijing, China, on Thursday, Nov. 9, 2017.  

Qilai Shen | Bloomberg | Getty Images

Lastly, as world governments seek to regulate fast-growing AI systems, there’s a risk geopolitical tensions between the U.S. and China may escalate under Trump.

In his first term as president, Trump enforced a number of hawkish policy measures on China, including a decision to add Huawei to a trade blacklist restricting it from doing business with American tech suppliers. He also launched a bid to ban TikTok, which is owned by Chinese firm ByteDance, in the U.S. — although he’s since softened his position on TikTok.

China is racing to beat the U.S. for dominance in AI. At the same time, the U.S. has taken measures to restrict China’s access to key technologies, mainly chips like those designed by Nvidia, which are required to train more advanced AI models. China has responded by attempting to build its own homegrown chip industry.

Technologists worry that a geopolitical fracturing between the U.S. and China on artificial intelligence could result in other risks, such as the potential for one of the two to develop a form of AI smarter than humans.

Max Tegmark, founder of the nonprofit Future of Life Institute, believes the U.S. and China could in future create a form of AI that can improve itself and design new systems without human supervision, potentially forcing both countries’ governments to individually come up with rules around AI safety.

“My optimistic path forward is the U.S. and China unilaterally impose national safety standards to prevent their own companies from doing harm and building uncontrollable AGI, not to appease the rival superpowers, but just to protect themselves,” Tegmark told CNBC in a November interview.

Governments are already trying to work together to figure out how to create regulations and frameworks around AI. In 2023, the U.K. hosted a global AI safety summit, which both the U.S. and China attended, to discuss potential guardrails around the technology.

– CNBC’s Arjun Kharpal contributed to this report


Sam Altman says OpenAI will top $20 billion in annualized revenue this year, hundreds of billions by 2030


OpenAI CEO Sam Altman speaks to media following a Q&A at the OpenAI data center in Abilene, Texas, U.S., Sept. 23, 2025.

Shelby Tauber | Reuters

OpenAI CEO Sam Altman said Thursday that the artificial intelligence startup is on track to hit an annualized revenue run rate of more than $20 billion this year, with plans to grow to hundreds of billions of dollars in sales by 2030.

The company has inked more than $1.4 trillion of infrastructure deals in recent months to try to build out the data centers it says are needed to meet growing demand. The staggering sum has raised questions from investors and others in the industry about where OpenAI will come up with the money.

“We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest to be really scaling up our technology,” Altman wrote in a post on X. “Massive infrastructure projects take quite a while to build, so we have to start now.”

OpenAI was founded as a nonprofit research lab in 2015, but has become one of the fastest-growing commercial entities on the planet following the launch of its chatbot ChatGPT in 2022. The startup is currently valued at $500 billion, though it’s still not profitable.

In September, OpenAI CFO Sarah Friar told CNBC that OpenAI was on track to generate $13 billion in revenue this year.

Friar caught the attention of the Trump administration this week after saying at an event that OpenAI is looking to create an ecosystem of banks, private equity and a federal “backstop” or “guarantee” that could help the company finance its investments in cutting-edge chips.

She clarified those comments late Wednesday, writing in a post on LinkedIn that OpenAI is not seeking a government backstop for its infrastructure commitments.

“I used the word ‘backstop’ and it muddied the point,” Friar wrote. “As the full clip of my answer shows, I was making the point that American strength in technology will come from building real industrial capacity which requires the private sector and government playing their part.”

Venture capitalist David Sacks, who is serving as President Donald Trump’s AI and crypto czar, said Thursday that there will be “no federal bailout for AI.” He wrote in a post on X that if one frontier model company in the U.S. fails, another will take its place.

Altman said Thursday that OpenAI does “not have or want government guarantees for OpenAI datacenters.” He said taxpayers should not bail out companies that make poor decisions, and that “if we get it wrong, that’s on us.”

“This is the bet we are making, and given our vantage point, we feel good about it,” Altman wrote. “But we of course could be wrong, and the market—not the government—will deal with it if we are.”



Meta reportedly projected 10% of 2024 sales came from scam, fraud ads


Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during a dinner with tech leaders in the State Dining Room of the White House in Washington, DC, on Thursday, Sept. 4, 2025.

Will Oliver | Bloomberg | Getty Images

Meta projected that 10% of its overall sales in 2024, or about $16 billion, came from running online ads for scams and banned goods, according to a Thursday report from Reuters.

Those kinds of ads included promotions for “fraudulent e-commerce and investment schemes, illegal online casinos and the sale of banned medical products,” according to the Reuters report, which was based on internal company documents. Those documents showed the company’s attempts to measure the prevalence of fraudulent advertising on its apps like Facebook and Instagram.

Meta brought in more than $164.5 billion in overall sales for 2024. Last week, the company said that third-quarter sales rose 26% year-over-year to $51.24 billion and that it lifted the low end of its total expenses for the year by $2 billion as part of its massive investments into artificial intelligence.

The Reuters report cited a December 2024 document showing that Meta generates roughly $7 billion in annualized revenue from so-called “higher risk” scam ads, meaning promotions that are clearly deceptive. Each day, Meta shows users an estimated 15 billion of these higher-risk scam ads, the report said, citing a separate document.

Although some of the documents show that Meta aims to reduce the amount of bogus ads on its platform, the Reuters report also said that other documents suggest the company is concerned that its business projections could be impacted by any abrupt removal of the fraudulent promotions.

A Meta spokesperson said that the company “aggressively” addresses scam and fraud ads on its apps. The projections that 10% of the company’s 2024 ad sales came from bunk ads “was a rough and overly-inclusive estimate rather than a definitive or final figure; in fact, subsequent review revealed that many of these ads weren’t violating at all,” the spokesperson said in a statement.

“Unfortunately, the leaked documents present a selective view that distorts Meta’s approach to fraud and scams by focusing on our efforts to assess the scale of the challenge, not the full range of actions we have taken to address the problem,” the spokesperson said.
