In April 2021, the European Commission presented its proposal for harmonized rules on artificial intelligence (AI), dubbed the Artificial Intelligence Act (AI Act). After the Council of the European Union and the European Parliament finalized their positions in December 2022 and June 2023, the legislative institutions entered a trilogue on the upcoming AI regulation.
The negotiations are expected to be challenging, given the significant differences between the Parliament and the Council on issues such as biometric surveillance. In Germany, political parties and digital experts have also voiced concerns about the proposed changes to the AI Act.
Die Linke calls for stricter regulation and transparency
The German left party Die Linke has highlighted significant gaps in European AI regulation, particularly regarding consumer protection and the obligations of AI providers and users.
It wants to require high-risk systems — that is, AI systems that pose a high risk to the health, safety and fundamental rights of natural persons — to be checked for compliance with the regulation by a supervisory authority before they are placed on the market. Die Linke has suggested that the German government appoint at least one national supervisory authority and provide it with sufficient financial resources to fulfill this task.
“Politics must ensure that a technology that is significant for everyone but controlled by only a few is supervised by a regulatory authority and proven trustworthy before its implementation,” said Petra Sitte, a politician from Die Linke, adding:
“Therefore, do not let yourself be blackmailed by lobbyists of big technology corporations. We can also strengthen an open-source approach in Europe […], meaning that a programming code is accessible to everyone.”
Die Linke also advocates an explicit ban on biometric identification and classification systems in public spaces, AI-driven election interference, and predictive policing systems.
According to the party, the exception for scientific AI systems specified in the AI Act should not apply if a system is used outside research institutions. Die Linke is already calling on the German government to develop training programs on the capabilities and limitations of AI systems, to evaluate AI systems used in government operations annually “using a standardized risk classification model,” and to register them in an AI registry.
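Purely as an illustration of what such a standardized risk classification and AI registry could look like in practice, the sketch below defines a minimal registry record with risk tiers loosely mirroring the AI Act’s risk-based approach. The tier names, fields and example system are assumptions made for illustration, not anything specified by Die Linke or the regulation.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # conformity check required before market launch
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class RegistryEntry:
    """One record in a hypothetical government AI registry."""
    system_name: str
    operator: str          # e.g., the ministry or agency using the system
    purpose: str
    risk_tier: RiskTier
    last_evaluation: date  # annual evaluation, as Die Linke proposes

    def needs_pre_market_check(self) -> bool:
        # Under Die Linke's proposal, high-risk systems would have to be
        # checked by a supervisory authority before reaching the market.
        return self.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)


# Hypothetical example entry
entry = RegistryEntry(
    system_name="BenefitEligibilityScorer",
    operator="Federal Employment Agency",
    purpose="Pre-screening of benefit applications",
    risk_tier=RiskTier.HIGH,
    last_evaluation=date(2023, 6, 1),
)
print(entry.needs_pre_market_check())  # True
```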
The Union prioritizes innovation and openness
Conversely, the center-right coalition of the Christian Democratic Union of Germany and the Christian Social Union in Bavaria — also known as “the Union” — emphasized that AI should not be overly regulated. It advocates for the federal government to prioritize AI and an innovation-friendly environment in Europe.
In its position paper on the trilogue negotiations, the Union claims that generative AI will enable German and European companies to excel internationally. The party wants to avoid the establishment of a large supervisory authority in Brussels, as well as differences in the implementation of the AI law across EU member states. While advocating for sharper definitions, it also suggests ensuring legal certainty by aligning with the General Data Protection Regulation, the Data Act and the Digital Markets Act.
The Union also makes concrete proposals to secure Germany’s technological sovereignty in AI. Recognizing the challenges of building an entirely new infrastructure in a realistic timeframe, the party recommends expanding the existing supercomputing infrastructure of the Gauss Centre for Supercomputing. It also proposes that German and European startups, small- and medium-sized enterprises (SMEs), and open-source developers be given dedicated access to this infrastructure.
To encourage the growth of German AI startups, the Union suggested such small businesses be awarded government contracts.
In addition, the Union highlighted an investment gap in university spin-offs and open-source AI, and advocated for targeted support through national initiatives such as the Sovereign Tech Fund. Given the widespread use of AI in various educational institutions, organizations and companies, the Union highlighted the urgent need to establish local systems to prevent accidental information leakage.
The German AI Association calls for practical solutions
The German AI Association (KI Bundesverband), Germany’s largest industry association for AI representing more than 400 innovative SMEs, startups and entrepreneurs, also advocates for openness to innovation.
In a post on X, the association announced the paper: “It’s here! Our new position paper on the EU’s Artificial Intelligence Act (#AIAct) highlights the key issues that need to be addressed in the upcoming #trilogue negotiations. Thanks to all our contributors! ➡ https://t.co/kHR5cL5VJ0”
“Europe must therefore be able to offer its own AI systems that can compete with their American or Chinese counterparts,” said Jörg Bienert, president of the KI Bundesverband. While the KI Bundesverband accepts the idea that a regulatory framework coupled with investment in AI can be a way to boost innovation, the association disagrees with the EU’s approach to this goal. Bienert believes any strategy must include three key components: mitigating potential risks, promoting domestic development, and protecting fundamental rights and European values.
According to Bienert, EU lawmakers have failed to create a regulatory framework that focuses on the real threats and risks of AI applications. He further stated that the AI Act risks becoming more of a regulation of advanced software than a risk-based approach. Introducing such extensive regulation now that United States and Chinese tech companies already dominate the field will hinder European AI companies’ chances of strengthening their position and create dependency on foreign technology.
“What is needed now are sensible and practical solutions to mitigate the real risks and threats posed by AI, not ideologically driven political quick fixes.”
Striking a balance
Germany’s government supports the AI Act but also sees potential for further improvement. Annika Einhorn, a spokesperson for the Federal Ministry for Economic Affairs and Climate Action, told Cointelegraph, “We attach importance to striking a balance between regulation and openness to innovation, particularly in the German and European AI landscape.” The federal government will also advocate for this in the trilogue negotiations on the AI Act.
In addition to the negotiations, the federal government is already implementing numerous measures to promote German AI companies, including establishing high-performance and internationally visible research structures and, in particular, providing state-of-the-art AI and computing infrastructure at an internationally competitive level. Furthermore, during the negotiations on the AI Act, the federal government continues to advocate for “an ambitious approach” to AI testbeds. This enables innovation while also meeting the requirements of the AI Act, according to Einhorn.
Is Europe being left behind?
All these suggestions and ideas may sound promising, but the fact is that most big AI models are being developed in the U.S. and China. In light of this trend, digital experts are concerned that the German and European digital economies may fall behind. While Europe possesses significant AI expertise, the limited availability of computing power is hindering further development.
To examine how Germany could catch up in AI, the Ministry for Economic Affairs and Climate Action commissioned a feasibility study titled “Large AI Models for Germany.”
In the study, experts argue that if Germany cannot independently develop and provide this foundational technology, German industry will have to rely on foreign services, which presents challenges regarding data protection, data security and ethical use of AI models.
The market dominance of U.S. companies in search engines, social media and cloud servers exemplifies the difficulties that can arise regarding data security and regulation. To address these difficulties, the study proposes the establishment of an AI supercomputing infrastructure in Germany, allowing for the development of large AI models and providing computing resources to smaller companies. However, specific details regarding funding and implementation remain to be determined.
“AI made in Europe”
In AI, Europe’s reliance on software and services from non-European countries is steadily increasing. According to Holger Hoos, an Alexander von Humboldt professor for AI, this poses a threat to Europe’s sovereignty, as regulation alone cannot adequately address the issue. Hoos emphasized the need for a substantial shift in the German and European AI strategies, accompanied by significant targeted public investments in the European AI landscape.
A key aspect of this proposal is the creation of a globally recognized “CERN for AI.” This center would possess the necessary computational power, data resources and skilled personnel to facilitate cutting-edge AI research. Such a center could attract talent, foster activities and drive projects in the field of AI on a global scale, making a noteworthy contribution to the success of “AI made in Europe.” Hoos added:
“We are at a critical juncture. It requires a clear change of course, a bold effort to make AI made in Europe a success — a success that will profoundly impact our economy, society and future.”
The Metropolitan Police has launched an investigation into suspended Reform MP Rupert Lowe.
It comes after the party revealed it had referred him to the police and stripped him of the whip on Friday, alleging he made “verbal threats” against chairman Zia Yusuf – which Mr Lowe denies.
A spokesperson for the Met told Sky News they have now launched an investigation “into an allegation of a series of verbal threats made by a 67-year-old man”.
They added: “Our original statement referred to alleged threats made in December 2024. We would like to clarify that when this matter was reported to us, it referred to a series of alleged threats made between December 2024 and February 2025.
“Further enquiries are ongoing at this stage.”
In response to the update, Mr Lowe said he was unaware of the specific allegations but denied wrongdoing.
“I have instructed lawyers to represent me in this matter,” he said.
“My lawyers have made contact with the Met Police, and have made them aware of my willingness to co-operate in any necessary investigation.
“My lawyers have not yet received any contact from the police. It is highly unusual for the police to disclose anything to the media at this stage of an investigation.
“I remain unaware of the specific allegations, but in any event, I deny any wrongdoing.
“The allegations are entirely untrue.”
Why was Rupert Lowe suspended?
In a statement on Friday, Reform claimed it had received evidence from staff of “derogatory and discriminating remarks made about women” by Mr Lowe, 67, who was elected to his Great Yarmouth seat last year.
The statement also claimed Mr Lowe had “on at least two occasions made threats of physical violence” against Mr Yusuf and “accordingly, this matter is with the police”.
Mr Lowe denied the claims, describing them as “vexatious”, and said it was “no surprise” that they had come a day after he raised “reasonable and constructive questions” about Reform leader Nigel Farage.
In an interview with the Daily Mail on Thursday, Mr Lowe had said Reform remains a “protest party led by the Messiah” under the Clacton MP.
Asked whether he thought the former UKIP leader had the potential to become prime minister, as his supporters have suggested, Mr Lowe said: “It’s too early to know whether Nigel will deliver the goods. He can only deliver if he surrounds himself with the right people.”
He also claimed that he was “barely six months into being an MP” himself and “in the betting to be the next prime minister”.
War of words escalates
Those words could have struck a nerve with Mr Farage after Elon Musk, the Tesla and SpaceX billionaire who has become one of Donald Trump’s closest allies, suggested the Reform leader “doesn’t have what it takes” and that Mr Lowe should take over.
The pair launched bitter personal attacks on each other in articles for the Sunday Telegraph, with Mr Farage accusing Mr Lowe of falling out with all his fellow Reform MPs due to “outbursts” and “inappropriate” language.
He also quoted Labour minister Mike Kane, who said after a confrontation with Mr Lowe in the Commons that his anger “showed a man not in charge of his own faculties”.
In his article, Mr Lowe repeated his claim that there is no credible evidence against him, said he was the victim of a “witch hunt”, and accused the Reform UK leadership of being unable to accept even the mildest constructive criticism.
US lawmakers are set for a heated debate on stablecoin regulation, with key industry leaders expected to outline their vision for the future of digital asset oversight.
Charles Cascarilla, co-founder and CEO of stablecoin issuer Paxos, is scheduled to testify before the House Financial Services Committee, where he will urge lawmakers to establish “cross-jurisdictional reciprocity” in stablecoin regulations.
In his prepared testimony, Cascarilla flagged concerns about the existing hurdles in the adoption of Paxos’ Global Dollar (USDG) stablecoin due to it being issued via a regulated affiliate in Singapore.
“We fear that products like Paxos’ Global Dollar (USDG) stablecoin, issued by a regulated affiliate in Singapore, will languish while departments and agencies make their determinations,” Cascarilla wrote in his speech.
US must act to prevent stablecoin regulatory arbitrage
Cascarilla recommended US lawmakers strengthen the current “international reciprocity language” to include clearly defined, accelerated timelines for the US Treasury Department to designate overseas jurisdictions for stablecoin regulation.
“This timeframe would force swift action and prevent bureaucratic delays while guaranteeing thorough scrutiny of foreign regulatory regimes,” the executive said.
Cascarilla emphasized that delays in making such designations would be a major hurdle to the adoption and distribution of stablecoins like USDG in the US, as well as to their cross-border operations.
“Reciprocity is not about lowering standards — it’s about raising them globally,” Cascarilla said, adding:
“By establishing a framework to recognize jurisdictions with comparable regulatory regimes — covering reserve requirements, AML measures and cybersecurity protocols — the United States can prevent regulatory arbitrage, where issuers exploit lax oversight abroad.”
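Purely to illustrate the kind of comparability test Cascarilla is describing, the sketch below checks whether a foreign regime covers each of the areas he lists within an accelerated review window. The criteria, field names and 180-day deadline are assumptions made for illustration, not figures from his testimony or any actual Treasury rule.

```python
from dataclasses import dataclass


@dataclass
class ForeignRegime:
    """Simplified profile of an overseas stablecoin regulatory regime."""
    jurisdiction: str
    reserve_requirements: bool     # reserve rules comparable to US standards
    aml_measures: bool             # anti-money-laundering program required
    cybersecurity_protocols: bool  # cybersecurity standards enforced
    review_days: int               # how long the US designation review took


# Hypothetical accelerated deadline for the Treasury determination
MAX_REVIEW_DAYS = 180


def qualifies_for_reciprocity(regime: ForeignRegime) -> bool:
    """A regime qualifies only if every required area is covered and the
    review finished within the assumed accelerated timeline."""
    return (
        regime.reserve_requirements
        and regime.aml_measures
        and regime.cybersecurity_protocols
        and regime.review_days <= MAX_REVIEW_DAYS
    )


print(qualifies_for_reciprocity(
    ForeignRegime("Singapore", True, True, True, review_days=120)
))  # True under these assumed criteria
```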
Paxos stablecoins were deemed non-compliant in the EU
Cascarilla’s remarks come amid some Paxos-issued stablecoins facing compliance issues in the European Union following the enforcement of its crypto regulation framework, Markets in Crypto-Assets (MiCA).
Since the MiCA framework went into full force in December 2024, multiple crypto asset service providers in the EU — including Crypto.com and Coinbase — have announced delistings of Paxos stablecoins, including Pax Dollar (USDP) and Pax Gold (PAXG).
While Paxos’ Cascarilla is now calling for the US to take urgent action to advance a global framework for stablecoin issuers regulated outside the US, some industry CEOs have urged all stablecoin firms to get regulated domestically instead. One of them put it this way:
“Whether you are an offshore company or based in Hong Kong, if you want to offer your US dollar stablecoin in the US, you should register in the US just like we have to go register everywhere else.”
The X account of Meteora co-founder Ben Chow was reportedly hacked after it posted a tweet reigniting the controversy around the launches of the Libra (LIBRA), Melania Meme (MELANIA) and Official Trump (TRUMP) memecoins, the same controversy that ultimately led to his resignation.
On March 11, Chow’s X account posted an “official statement” about his departure from Meteora. The post called out DefiTuna founders Vlad Pozniakov and Dhirk, claiming the duo’s sole intention was to extract the maximum funds possible from various memecoin token launches, including MELANIA, Mates (MATES) and a Raydium launch.
“As a long time Solana builder, the reason I stepped down is because I am far too trusting for how parasitic the memecoin space is.”
Source: Ben Chow (Deleted post)
The controversial memecoin plot thickens for Meteora
However, Meteora’s official X account flagged the post as fraudulent, claiming that Chow’s X account had been compromised and urging users to refrain from clicking on any links.
Chow did not respond to Cointelegraph’s request for comment. The fraudulent tweet has since been deleted after the account was recovered by Meteora.
The message posted from Chow’s account contained alleged screenshots of WhatsApp conversations between Kelsier Ventures CEO Hayden Davis, Kelsier Ventures chief operating officer Gideon Davis and Pozniakov discussing the MATES token, in which one participant was quoted as saying: “Yeah fellas tbh we are trying to max extract on this one.”
The legitimacy of the conversations could not be verified.
Implications of memecoin speculation in Argentine politics
Argentine President Javier Milei is facing calls for impeachment after endorsing the Solana-native LIBRA token. Milei’s endorsement caused the token’s value to surge from near zero to $5, briefly reaching a $4 billion market capitalization.
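As a rough sanity check on those reported figures, market capitalization is simply price multiplied by circulating supply, so the peak numbers imply a circulating supply on the order of 800 million tokens. The calculation below is only a back-of-the-envelope illustration based on the reported peak values; the implied supply is not a figure stated here.

```python
# Back-of-the-envelope check of the reported LIBRA peak figures
peak_market_cap = 4_000_000_000  # $4 billion market capitalization, as reported
peak_price = 5.0                 # roughly $5 per token at the peak

implied_circulating_supply = peak_market_cap / peak_price
print(f"{implied_circulating_supply:,.0f} tokens")  # 800,000,000
```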
Milei dismissed rug pull allegations, claiming that he regularly promotes business projects as part of his free-market philosophy. His endorsement of the KIP Protocol, the developers behind LIBRA, was a part of the broader policy.