Jaque Silv | SOPA Images | Lightrocket | Getty Images

Denmark on Wednesday laid out a framework that can help EU member states use generative artificial intelligence in compliance with the European Union’s strict new AI Act — and Microsoft‘s already on board.

A government-backed alliance of major Danish corporates, led by IT consultancy Netcompany, launched the “Responsible Use of AI Assistants in the Public and Private Sector” white paper, a blueprint that sets out “best-practice examples” for how firms should use and support employees in deploying AI systems in a regulated environment.

The guide also aims to encourage delivery of “secure and reliable services” by businesses to consumers. Denmark’s Agency for Digital Government, the country’s central business registry CVR and pensions authority ATP are among the founding partners adopting the framework.

The framework includes guidelines on how the public and private sectors should collaborate, deploying AI in society, complying with both the AI Act and the General Data Protection Regulation (GDPR), mitigating risks and reducing bias, scaling AI implementation, storing data securely, and training staff.

Netcompany CEO André Rogaczewski said the provisions laid out in the white paper were primarily aimed at companies in heavily regulated industries, such as in financial services. He told CNBC he’s aiming to address one core question: “How can we scale the responsible usage of AI?”

What is the EU AI Act?

The EU AI Act is a landmark law that aims to govern the way companies develop, use and apply AI. It came into force in August, after previously receiving final approval from EU member states, lawmakers, and the European Commission — the executive body of the EU — in May.

The law applies a risk-based approach to governing AI, meaning various applications of the technology are treated differently depending on the risk level they pose. It’s been touted as the world’s first major AI law that will give firms clarity under a harmonized, EU-wide regulatory framework.

Though the rules are technically in effect, implementing them is a lengthy process. Most of the provisions of the Act — including rules for general-purpose AI systems like OpenAI’s ChatGPT — won’t take effect until at least 2026, at the end of a two-year transition period.

“It is absolutely vital for the competitiveness of our businesses and future progress of Europe that both the private and public sector will succeed in developing and using AI in the years to come,” Caroline Stage Olsen, Denmark’s minister of digital affairs, told CNBC, calling the white paper a “helpful step” toward that goal.

Netcompany’s Rogaczewski told CNBC that he pitched the idea for a white paper to some of Denmark’s biggest banks and insurance firms some months ago. He found that, though each organization was “experimenting” with AI, institutions lacked a “common standard” to get the most out of the tech.

Rogaczewski hopes the Danish white paper will also offer a blueprint for other countries and businesses seeking to simplify compliance with the EU AI Act.

Microsoft’s decision to sign up to the guidelines is of particular note. “Getting Microsoft involved was important since generative AI solutions often involve algorithms and global tech,” said Rogaczewski, adding that the tech giant’s involvement underlines how responsible digitization is possible across borders.

The U.S. tech giant is a major backer of ChatGPT developer OpenAI, which secured a $157 billion valuation this year. Microsoft also licenses OpenAI’s technology out to enterprise firms via its Azure cloud computing platform.


Oracle says there have been ‘no delays’ in OpenAI arrangement after stock slide


Oracle CEO Clay Magouyrk appears on a media tour of the Stargate AI data center in Abilene, Texas, on Sept. 23, 2025.

Kyle Grillot | Bloomberg | Getty Images

Oracle on Friday pushed back against a report that said the company will complete data centers for OpenAI, one of its major customers, in 2028, rather than 2027.

The delay is due to a shortage of labor and materials, according to the Friday report from Bloomberg, which cited unnamed people. Oracle shares fell to a session low of $185.98, down 6.5% from Thursday’s close.

“Site selection and delivery timelines were established in close coordination with OpenAI following execution of the agreement and were jointly agreed,” an Oracle spokesperson said in an email to CNBC. “There have been no delays to any sites required to meet our contractual commitments, and all milestones remain on track.”

The Oracle spokesperson did not specify a timeline for turning on cloud computing infrastructure for OpenAI. In September, OpenAI said it had a partnership with Oracle worth more than $300 billion over the next five years.

“We have a good relationship with OpenAI,” Clay Magouyrk, one of Oracle’s two newly appointed CEOs, said at an October analyst meeting.

Doing business with OpenAI is relatively new to 48-year-old Oracle. Historically, Oracle grew through sales of its database software and business applications. Its cloud infrastructure business now contributes over one-fourth of revenue, although Oracle remains a smaller hyperscaler than Amazon, Microsoft and Google.

OpenAI has also made commitments to other companies as it looks to meet expected capacity needs.

In September, Nvidia said it had signed a letter of intent with OpenAI to deploy at least 10 gigawatts of Nvidia equipment for the San Francisco artificial intelligence startup. The first phase of that project is expected in the second half of 2026.

Nvidia and OpenAI said in a September statement that they “look forward to finalizing the details of this new phase of strategic partnership in the coming weeks.”

But no announcement has come yet.

In a November filing, Nvidia said “there is no assurance that we will enter into definitive agreements with respect to the OpenAI opportunity.”

OpenAI has historically relied on Nvidia graphics processing units to operate ChatGPT and other products, and now it’s also looking at designing custom chips in a collaboration with Broadcom.

On Thursday, Broadcom CEO Hock Tan laid out a timeline for the OpenAI work, which was announced in October. Broadcom and OpenAI said they had signed a term sheet.

“It’s more like 2027, 2028, 2029, 10 gigawatts, that was the OpenAI discussion,” Tan said on Broadcom’s earnings call. “And that’s, I call it, an agreement, an alignment of where we’re headed with respect to a very respected and valued customer, OpenAI. But we do not expect much in 2026.”

OpenAI declined to comment.



AI order from Trump might be ‘illegal,’ Democrats and consumer advocacy groups claim


“This is the wrong approach — and most likely illegal,” Sen. Amy Klobuchar, D-Minn., said in a post on X Thursday.

“We need a strong federal safety standard, but we should not remove the few protections Americans currently have from the downsides of AI,” Klobuchar said.

Trump’s executive order directs Attorney General Pam Bondi to create a task force to challenge state laws regulating AI.

The Commerce Department was also directed to identify “onerous” state regulations aimed at AI.

The order is a win for tech companies such as OpenAI and Google and the venture firm Andreessen Horowitz, which have all lobbied against state regulations they view as burdensome. 

It follows a push by some Republicans in Congress to impose a moratorium on state AI laws. A recent plan to tack on that moratorium to the National Defense Authorization Act was scuttled.

Collin McCune, head of government affairs at Andreessen Horowitz, celebrated Trump’s order, calling it “an important first step” to boost American competition and innovation. But McCune urged Congress to codify a national AI framework.

“States have an important role in addressing harms and protecting people, but they can’t provide the long-term clarity or national direction that only Congress can deliver,” McCune said in a statement.

Sriram Krishnan, a White House AI advisor and former general partner at Andreessen Horowitz, said during an interview Friday on CNBC’s “Squawk Box” that Trump was looking to partner with Congress to pass such legislation.

“The White House is now taking a firm stance where we want to push back on ‘doomer’ laws that exist in a bunch of states around the country,” Krishnan said.

He also said that the goal of the executive order is to give the White House tools to go after state laws that it believes make America less competitive, such as recently passed legislation in Democratic-led states like California and Colorado.

The White House will not use the executive order to target state laws that protect the safety of children, Krishnan said.

Robert Weissman, co-president of the consumer advocacy group Public Citizen, called Trump’s order “mostly bluster” and said the president “cannot unilaterally preempt state law.”

“We expect the EO to be challenged in court and defeated,” Weissman said in a statement. “In the meantime, states should continue their efforts to protect their residents from the mounting dangers of unregulated AI.”

Weissman said about the order, “This reward to Big Tech is a disgraceful invitation to reckless behavior by the world’s largest corporations and a complete override of the federalist principles that Trump and MAGA claim to venerate.”

In the short term, the order could affect a handful of states that have already passed legislation targeting AI. The order says that states whose laws are considered onerous could lose federal funding.

One Colorado law, set to take effect in June, will require AI developers to protect consumers from reasonably foreseeable risks of algorithmic discrimination.

Some say Trump’s order will have no real impact on that law or other state regulations.

“I’m pretty much ignoring it, because an executive order cannot tell a state what to do,” said Colorado state Rep. Brianna Titone, a Democrat who co-sponsored the anti-discrimination law.

In California, Gov. Gavin Newsom recently signed a law that, starting in January, will require major AI companies to publicly disclose their safety protocols. 

That law’s author, state Sen. Scott Wiener, said that Trump’s stated goal of having the United States dominate the AI sector is undercut by his recent moves. 

“Of course, he just authorized chip sales to China & Saudi Arabia: the exact opposite of ensuring U.S. dominance,” Wiener wrote in an X post on Thursday night. The Bay Area Democrat is seeking to succeed Speaker-emerita Nancy Pelosi in the U.S. House of Representatives.

Trump on Monday said he will allow Nvidia to sell its advanced H200 chips to “approved customers” in China, provided that the U.S. gets a 25% cut of revenues.
