LISBON, Portugal — Tech giants are increasingly investing in the development of so-called “sovereign” artificial intelligence models as they seek to boost competitiveness by focusing more on local infrastructure.
Data sovereignty refers to the idea that people’s data should be stored on infrastructure within the country or continent they reside in.
“Sovereign AI is a relatively new term that’s emerged in the last year or so,” Chris Gow, IT networking giant Cisco’s Brussels-based EU public policy lead, told CNBC.
Currently, many of the biggest large language models (LLMs), like OpenAI’s ChatGPT and Anthropic’s Claude, use data centers based in the U.S. to store data and process requests via the cloud.
This has led to concern from politicians and regulators in Europe, who see dependence on U.S. technology as harmful to the continent’s competitiveness — and, more worryingly, technological resilience.
Where did ‘AI sovereignty’ come from?
The notion of data and technological sovereignty has been on Europe’s agenda for some time, driven in part by businesses reacting to new regulations.
The European Union’s General Data Protection Regulation (GDPR), for example, requires companies to handle user data securely and in a way that respects users’ right to privacy. High-profile cases in the EU have also raised doubts over whether data on European citizens can be transferred across borders safely.
The European Court of Justice in 2020 invalidated an EU-U.S. data-sharing framework on the grounds that the pact did not afford the same level of protection as the GDPR guarantees within the EU. Last year, the EU-U.S. Data Privacy Framework was established to ensure that data can flow safely between the EU and the U.S.
These political developments have ultimately resulted in a push toward localization of cloud infrastructure, where data for many online services is stored and processed.
Filippo Sanesi, global head of marketing and operations at OVHCloud, said the French cloud firm is seeing strong demand for its European-located infrastructure from customers who “understand the value of having their data in Europe, which [is] subject to European legislation.”
“As this concept of data sovereignty becomes more mature and people understand what it means, we see more and more companies understanding the importance of having your data locally and under a specific jurisdiction and governance,” Sanesi told CNBC. “We have a lot of data,” he added. “This data is sovereign in specific countries, under specific regulations.”
“Now, with this data, you can actually make products and services for AI, and those services should then be sovereign, should be controlled, deployed and developed locally by local talent for the local population or businesses.”
The AI sovereignty push hasn’t been driven forward by regulators — at least, not yet, according to Cisco’s Gow. Rather, it’s come from private companies, which are opening more data centers — facilities containing vast amounts of computing equipment to enable cloud-based AI tools — in Europe, he said.
Sovereign AI is “more driven by the industry naming it that, than it is from the policymakers’ side,” Gow said. “You don’t see the ‘AI sovereignty’ terminology used on the regulator side yet.”
Countries are pushing the idea of AI sovereignty because they recognize AI is “the future” and a “massively strategic technology,” Gow said.
Governments are focusing on boosting their domestic tech companies and ecosystems, as well as the all-important backend infrastructure that enables AI services.
“The AI workload uses 20 times the bandwidth of a traditional workload,” Gow said. It’s also about enabling the workforce, according to Gow, as firms need skilled workers to be successful.
Most important of all, however, is the data. “What you’re seeing is quite a few attempts from that side to think about training LLMs on localized data, in language,” Gow said.
The aim of the Italia project is to keep data within a given jurisdiction and draw on data from citizens in that region, so that the results produced by AI systems there are more grounded in local languages, culture and history.
“Sovereign AI is about reflecting the values of an organization or, equally, the country that you’re in and the values and the language,” David Hogan, EMEA head of enterprise sales for chipmaking giant Nvidia, told CNBC.
“The core challenge is that most of the frontier models today have been trained primarily on Western data generally,” Hogan added.
In Denmark, for example, where Nvidia has a major presence, officials are concerned about vital services such as health care and telecoms being delivered by AI systems that aren’t “reflective” of local Danish culture and values, according to Hogan.
On Wednesday, Denmark published a landmark white paper outlining how companies can use AI in compliance with the incoming EU AI Act — the world’s first major AI law. The document is meant to serve as a blueprint for other EU nations to follow and adopt.
“If you’re in a European country that’s not one of the major language countries that’s spoken internationally, probably less than 2% of the data is trained on your language — let alone your culture,” Hogan said.
How regulation fueled a mindset shift
That’s not to say regulations haven’t proven an important factor in getting tech giants to think more about building localized AI infrastructure within Europe.
OVHCloud’s Sanesi said regulations like the EU’s GDPR catalyzed a lot of the interest in onshoring the processing of data in a given region.
The concept of AI sovereignty is also getting buy-in from local European tech firms.
Earlier this week, Berlin-headquartered search engine Ecosia and its Paris-based peer Qwant announced a joint venture to develop a European search index from scratch, aiming to serve improved French and German language results.
Meanwhile, French telecom operator Orange has said it’s in discussions with a number of foundational AI model companies about building a smartphone-based “sovereign AI” model for its customers that more accurately reflects their own language and culture.
“It wouldn’t make sense to build our own LLMs. So there’s a lot of discussion right now about, how do we partner with existing providers to make it more local and safer?” Bruno Zerbib, Orange’s chief technology officer, told CNBC.
“There are a lot of use cases where [AI data] can be processed locally [on a phone] instead of processed on the cloud,” Zerbib added. Orange hasn’t yet selected a partner for these sovereign AI model ambitions.
Tesla is facing a federal investigation into possible safety defects with FSD, its partially automated driving system that is also known as Full Self-Driving (Supervised).
Media, vehicle-owner and other incident reports to the National Highway Traffic Safety Administration showed that in 44 separate incidents, Tesla drivers using FSD said the system caused their vehicles to run a red light, steer into oncoming traffic or commit other traffic safety violations, leading to collisions, including some that injured people.
In a notice posted to the agency’s website on Thursday, NHTSA said the investigation concerns “all Tesla vehicles that have been equipped with FSD (Supervised) or FSD (Beta),” which is an estimated 2,882,566 of the company’s electric cars.
Tesla cars, even with FSD engaged, require a human driver ready to brake or steer at any time.
The NHTSA Office of Defects Investigation opened a Preliminary Evaluation to “assess whether there was prior warning or adequate time for the driver to respond to the unexpected behavior” by Tesla’s FSD, or “to safely supervise the automated driving task,” among other things.
The ODI’s review will also assess “warnings to the driver about the system’s impending behavior; the time given to drivers to respond; the capability of FSD to detect, display to the driver, and respond appropriately to traffic signals; and the capability of FSD to detect and respond to lane markings and wrong-way signage.”
Tesla did not respond to a request for comment on the new federal probe. The company released an updated version of FSD this week, version 14.1, to customers.
For years, Tesla CEO Elon Musk has promised investors that Tesla would someday be able to turn their existing electric vehicles into robotaxis, capable of generating income for owners while they sleep or go on vacation, with a simple software update.
That hasn’t happened yet, and Tesla has since informed owners that future upgrades will require new hardware as well as software releases.
Tesla is testing a Robotaxi-brand ride-hailing service in Texas and elsewhere, but it includes human safety drivers or valets on board who either conduct the drives or manually intervene as needed.
In February this year, Musk and President Donald Trump slashed NHTSA staff as part of a broader effort to reduce the federal workforce, impacting the agency’s ability to investigate vehicle safety and regulate autonomous vehicles, The Washington Post first reported.
Commander Jared Isaacman of Polaris Dawn, a private human spaceflight mission, speaks at a press conference at the Kennedy Space Center in Cape Canaveral, Florida, U.S. August 19, 2024.
Isaacman, who has close ties with SpaceX CEO Elon Musk, was at the White House in September for Trump’s dinner for tech power players. Musk did not attend.
Trump and Isaacman have had multiple in-person meetings in recent weeks to talk about the Shift4 founder’s vision for the space program, according to Bloomberg, citing a person familiar with the meetings.
After a fiery back-and-forth between Musk and Trump over government spending, the president pulled Isaacman’s nomination for the post, saying he was a “blue blooded Democrat, who had never contributed to a Republican before.”
“I also thought it inappropriate that a very close friend of Elon, who was in the Space Business, run NASA, when NASA is such a big part of Elon’s corporate life,” Trump wrote in a Truth Social post on June 6.
Trump named Transportation Secretary Sean Duffy interim head of NASA in July.
Isaacman, who declined to comment, was initially nominated in December to lead the space agency.
Isaacman is a seasoned space traveler, having led two private spaceflights with SpaceX in 2021 and 2024. Shift4 has invested $27.5 million in SpaceX, according to a 2021 filing.
Isaacman stepped down as CEO from Shift4, the payments company he founded in 1999 at the age of 16, after his nomination was pulled, and now serves as executive chairman.
“Even knowing the outcome, I would do it all over again,” Isaacman wrote about the NASA nomination process in a letter to investors announcing the Shift4 change.
Now, it looks like he gets to do it all over again.
Tensions between Musk and Trump have cooled in the months since, but big challenges face the U.S. space program.
Trump has proposed cutting more than $6 billion from NASA’s budget.
As a result of Trump’s Department of Government Efficiency initiative, which Musk led in the first half of 2025, around 4,000 NASA employees took deferred resignation program offers, cutting the space agency’s staff of 18,000 by about one-fifth.
During the October government shutdown, NASA made exceptions allowing employees to keep working on missions involving Musk’s SpaceX and Jeff Bezos’ Blue Origin.
An illustration photo shows Sora 2 logo on a smartphone.
The Creative Artists Agency on Thursday slammed OpenAI’s new video creation app Sora, saying it poses “significant risks” to its clients and their intellectual property.
The talent agency, which represents artists including Doja Cat, Scarlett Johansson, and Tom Hanks, questioned whether OpenAI believed that “humans, writers, artists, actors, directors, producers, musicians, and athletes deserve to be compensated and credited for the work they create.”
“Or does Open AI believe they can just steal it, disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work? In our opinion, the answer to this question is obvious,” the CAA wrote.
OpenAI did not immediately respond to CNBC’s request for comment.
The CAA said that it was “open to hearing” solutions from OpenAI and is working with IP leaders, unions, legislators and global policymakers on the matter.
“Control, permission for use, and compensation is a fundamental right of these workers,” the CAA wrote. “Anything less than the protection of creators and their rights is unacceptable.”
Sora, which launched last week and has quickly reached 1 million downloads, allows users to create AI-generated clips often featuring popular characters and brands.
OpenAI launched with an “opt-out” system, which allowed the use of copyrighted material unless studios or agencies requested that their IP not be used.
CEO Sam Altman later said in a blog post that they would give rightsholders “more granular control over generation of characters.”
Talent agency WME sent a memo to agents on Wednesday saying it has “notified OpenAI that all WME clients be opted out of the latest Sora AI update, regardless of whether IP rights holders have opted out [of] IP our clients are associated with,” the LA Times reported.
United Talent Agency also criticized Sora’s use of copyrighted property as “exploitation, not innovation,” in a statement on Thursday.
“There is no substitute for human talent in our business, and we will continue to fight tirelessly for our clients to ensure that they are protected,” UTA wrote. “When it comes to OpenAI’s Sora or any other platform that seeks to profit from our clients’ intellectual property and likeness, we stand with artists.”
In a letter written to OpenAI last week, Disney said it did not authorize OpenAI and Sora to copy, distribute, publicly display or perform any image or video that features its copyrighted works and characters, according to a person familiar with the matter.
Disney also wrote that it did not have an obligation to “opt-out” of appearing in Sora or any OpenAI system to preserve its rights under copyright law, the person said.
The Motion Picture Association issued a statement on Tuesday, urging OpenAI to take “immediate and decisive action” against videos using Sora to produce content infringing on its copyrighted material.
Entertainment companies have expressed numerous copyright concerns as generative AI has surged.
Universal and Disney sued creator Midjourney in June, alleging that the company used and distributed AI-generated characters from their movies despite requests to stop. Disney also sent a cease-and-desist letter to AI startup Character.AI in September, warning the company to stop using its copyrighted characters without authorization.