In April 2021, the European Commission presented its proposal for harmonized rules on artificial intelligence (AI), dubbed the Artificial Intelligence Act (AI Act). After the Council of the European Union and the European Parliament finalized their positions in December 2022 and June 2023, the legislative institutions entered a trilogue on the upcoming AI regulation.
The negotiations are expected to be challenging given the significant differences between the Parliament and the Council on specific issues such as biometric surveillance. In Germany, political groups and digital experts have also raised concerns about the proposed AI Act.
Die Linke calls for stricter regulation and transparency
The German left party Die Linke highlighted significant gaps in European AI regulation, particularly regarding consumer protection, and obligations for AI providers and users.
It wants to require high-risk systems — AI systems that pose a high risk to the health, safety or fundamental rights of natural persons — to be checked for compliance with the regulation by a supervisory authority before they are launched on the market. Die Linke has suggested that the German government appoint at least one national supervisory authority and provide it with sufficient financial resources to fulfill this task.
“Politics must ensure that a technology that is significant for everyone but controlled by only a few is supervised by a regulatory authority and proven trustworthy before its implementation,” said Petra Sitte, a politician from Die Linke, adding:
“Therefore, do not let yourself be blackmailed by lobbyists of big technology corporations. We can also strengthen an open-source approach in Europe […], meaning that a programming code is accessible to everyone.”
Die Linke also advocates an explicit ban on biometric identification and classification systems in public spaces, AI-driven election interference, and predictive policing systems.
According to the party, the exception for scientific AI systems specified in the AI Act should not apply if the system is used outside research institutions. Die Linke is also calling on the German government to develop training programs on the capabilities and limitations of AI systems, to evaluate AI systems used in government operations annually “using a standardized risk classification model,” and to register them in an AI registry.
The Union prioritizes innovation and openness
Conversely, the center-right coalition of the Christian Democratic Union of Germany and the Christian Social Union in Bavaria — also known as “the Union” — emphasized that AI should not be overly regulated. It advocates for the federal government to prioritize AI and an innovation-friendly environment in Europe.
Regarding the trilogue negotiations, the Union pointed to its position paper, which claims that generative AI will enable German and European companies to excel internationally. The party wants to avoid the establishment of a large supervisory authority in Brussels, as well as divergent implementations of the AI law across EU member states. While advocating for sharper definitions, it also suggests ensuring legal certainty by aligning with the General Data Protection Regulation, the Data Act and the Digital Markets Act.
The Union also makes concrete proposals to secure Germany’s technological sovereignty in AI. Recognizing the challenges of building an entirely new infrastructure in a realistic timeframe, the party recommends expanding the existing supercomputing infrastructure of the Gauss Center for Supercomputing. It also proposes that German and European startups, small- and medium-sized enterprises (SMEs), and open-source developers be given dedicated access to this infrastructure.
To encourage the growth of German AI startups, the Union suggested such small businesses be awarded government contracts.
In addition, the Union highlighted an investment gap in university spin-offs and open-source AI, and advocated for targeted support through national initiatives such as the Sovereign Tech Fund. Given the widespread use of AI in various educational institutions, organizations and companies, the Union highlighted the urgent need to establish local systems to prevent accidental information leakage.
The German AI Association demands practical solutions
The German AI Association (KI Bundesverband), Germany’s largest industry association for AI representing more than 400 innovative SMEs, startups and entrepreneurs, also advocates for openness to innovation.
“Europe must therefore be able to offer its own AI systems that can compete with their American or Chinese counterparts,” said Jörg Bienert, president of the KI Bundesverband. While the KI Bundesverband accepts the idea that a regulatory framework coupled with investment in AI can be a way to boost innovation, the association disagrees with the EU’s approach to this goal. Bienert believes any strategy must include three key components: mitigating potential risks, promoting domestic development, and protecting fundamental rights and European values.
According to Bienert, EU lawmakers have failed to create a regulatory framework focused on the real threats and risks of AI applications. He further stated that the AI Act risks becoming a regulation for advanced software in general rather than a risk-based approach. In his view, introducing such extensive regulation now that United States and Chinese tech companies already dominate the field will hinder European AI companies’ chances of strengthening their position and create dependency on foreign technology.
“What is needed now are sensible and practical solutions to mitigate the real risks and threats posed by AI, not ideologically driven political quick fixes.”
Striking a balance
Germany’s government supports the AI Act but also sees further potential for improvements. Annika Einhorn, a spokesperson for the Federal Ministry for Economic Affairs and Climate Action, told Cointelegraph, “We attach importance to striking a balance between regulation and openness to innovation, particularly in the German and European AI landscape.” The federal government will also advocate for this in the trilogue negotiations on the AI Act.
In addition to the negotiations, the federal government is already implementing numerous measures to promote German AI companies, including establishing high-performance and internationally visible research structures and, in particular, providing state-of-the-art AI and computing infrastructure at an internationally competitive level. Furthermore, during the negotiations on the AI Act, the federal government continues to advocate for “an ambitious approach” to AI testbeds. This enables innovation while also meeting the requirements of the AI Act, according to Einhorn.
Is Europe being left behind?
All these suggestions and ideas may sound promising, but the fact is that most big AI models are being developed in the U.S. and China. In light of this trend, digital experts are concerned that the German and European digital economies may fall behind. While Europe possesses significant AI expertise, the limited availability of computing power hinders further development.
To examine how Germany could catch up in AI, the Ministry for Economic Affairs and Climate Action commissioned a feasibility study titled “Large AI Models for Germany.”
In the study, experts argue that if Germany cannot independently develop and provide this foundational technology, German industry will have to rely on foreign services, which presents challenges regarding data protection, data security and ethical use of AI models.
The market dominance of U.S. companies in search engines, social media and cloud servers exemplifies the difficulties that can arise regarding data security and regulation. To address these difficulties, the study proposes the establishment of an AI supercomputing infrastructure in Germany, allowing for the development of large AI models and providing computing resources to smaller companies. However, specific details regarding funding and implementation remain to be determined.
“AI made in Europe”
In AI, Europe’s reliance on software and services from non-European countries is steadily increasing. According to Holger Hoos, an Alexander von Humboldt professor for AI, this poses a threat to its sovereignty, as regulation alone cannot adequately address the issue. Hoos emphasized the need for a substantial shift in the German and European AI strategies, accompanied by significant targeted public investments in the European AI landscape.
A key aspect of this proposal is the creation of a globally recognized “CERN for AI.” This center would possess the necessary computational power, data resources and skilled personnel to facilitate cutting-edge AI research. Such a center could attract talent, foster activities and drive projects in the field of AI on a global scale, making a noteworthy contribution to the success of “AI made in Europe.” Hoos added:
“We are at a critical juncture. It requires a clear change of course, a bold effort to make AI made in Europe a success — a success that will profoundly impact our economy, society and future.”
The shutdown of the US government entered its 38th day on Friday, with the Senate set to vote on a funding bill that could temporarily restore operations.
According to the US Senate’s calendar of business on Friday, the chamber will consider a House of Representatives continuing resolution to fund the government. It’s unclear whether the bill will cross the 60-vote threshold needed to pass in the Senate after numerous failed attempts in the previous weeks.
Amid the shutdown, Republican and Democratic lawmakers have reportedly continued discussions on the digital asset market structure bill. The legislation, passed as the CLARITY Act in the House in July and referred to as the Responsible Financial Innovation Act in the Senate, is expected to provide a comprehensive regulatory framework for cryptocurrencies in the US.
Although members of Congress have continued to receive paychecks during the shutdown — unlike many agencies, where staff have been furloughed and others are working without pay — any legislation, including that related to crypto, seems to have taken a backseat to addressing the shutdown.
At the time of publication, it was unclear how much support Republicans may have gained from Democrats, who have held the line in demanding the extension of healthcare subsidies and reversing cuts from a July funding bill.
Is the Republicans’ timeline for the crypto bill still attainable?
Wyoming Senator Cynthia Lummis, one of the market structure bill’s most prominent advocates in Congress, said in August that Republicans planned to have the legislation through the Senate Banking Committee by the end of September, the Senate Agriculture Committee in October and signed into law by 2026.
Though reports suggested lawmakers on each committee were discussing terms for the bill, the timeline seemed less likely amid a government shutdown and the holidays approaching.
Japan’s financial regulator, the Financial Services Agency (FSA), endorsed a project by the country’s largest financial institutions to jointly issue yen-backed stablecoins.
In a Friday statement, the FSA announced the launch of its “Payment Innovation Project” as a response to progress in “the use of blockchain technology to enhance payments.” The initiative involves Mizuho Bank, Mitsubishi UFJ Bank, Sumitomo Mitsui Banking Corporation, Mitsubishi Corporation and its financial arm, and Progmat, MUFG’s stablecoin issuance platform.
The announcement follows recent reports that those companies plan to modernize corporate settlements and reduce transaction costs through a yen-based stablecoin project built on MUFG’s stablecoin issuance platform Progmat. The institutions in question serve over 300,000 corporate clients.
The regulator noted that, starting this month, the companies will begin issuing payment stablecoins. The initiative aims to improve user convenience, enhance Japanese corporate productivity and innovate the local financial landscape.
The participating companies are expected to ensure that users are protected and informed about the systems they use. “After the completion of the pilot project, the FSA plans to publish the results and conclusions,” the announcement reads.
The announcement follows the Monday launch of Tokyo-based fintech firm JPYC’s yen-backed stablecoin, the first of its kind in Japan, along with a dedicated platform. The company’s president, Noriyoshi Okabe, said at the time that seven companies were already planning to incorporate the new stablecoin.
Recently, Japanese regulators have been hard at work setting new rules for the cryptocurrency industry. So much so that Bybit, the world’s second-largest crypto exchange by trading volume, announced it would pause new user registrations in the country as it adapts to the new conditions.
Local regulators seem to be opening up to the industry. Earlier this month, the FSA was reported to be preparing to review regulations that could allow banks to acquire and hold cryptocurrencies such as Bitcoin (BTC) for investment purposes.
At the same time, Japan’s securities regulator was also reported to be working on regulations to ban and punish crypto insider trading. Following the change, Japan’s Securities and Exchange Surveillance Commission would be authorized to investigate suspicious trading activity and impose fines on violators.
The European Union is considering a partial halt to its landmark artificial intelligence laws in response to pressure from the US government and Big Tech companies.
The European Commission plans to ease part of its digital rulebook, including the AI Act that took effect last year, as part of a “simplification package” that is to be decided on Nov. 19, the Financial Times reported on Friday.
If approved, the proposed halt could grant generative AI providers already operating in the market a one-year compliance grace period and delay enforcement of fines for violations of AI transparency rules until August 2027.
“When it comes to potentially delaying the implementation of targeted parts of the AI Act, a reflection is still ongoing,” the commission’s Thomas Regnier told Cointelegraph, adding that the EC is working on the digital omnibus to present it on Nov. 19.
EU’s AI Act entered into force in August 2024
The commission proposed the first EU AI law in April 2021, with the mission of establishing a risk-based AI classification system.
Passed by the European Parliament and the Council of the European Union in 2024, the European AI Act entered into force in August 2024, with its provisions set to be implemented gradually over the following six to 36 months.
An excerpt from the EU AI Act’s implementation timeline. Source: ArtificialIntelligenceAct.eu
According to the FT, the bulk of the provisions for high-risk AI systems, which can pose “serious risks” to health, safety or citizens’ fundamental rights, are set to come into effect in August 2026.
With the draft “simplification” proposal, companies breaching the rules on the highest-risk AI use could reportedly receive a “grace period” of one year.
The proposal is still subject to informal discussions within the commission and with EU states and could still change ahead of its adoption on Nov. 19, the report noted.
“Various options are being considered, but no formal decision has been taken at this stage,” the EC’s Regnier told Cointelegraph, adding: “The commission will always remain fully behind the AI Act and its objectives.”
“AI is an incredibly disruptive technology, the full implications of which we are still only just beginning to fully appreciate,” Mercuryo co-founder and CEO Petr Kozyakov said, adding:
“Ultimately, Europe’s competitiveness will depend on its ability to set high standards without creating barriers that may risk letting innovation take place elsewhere.”
The EU’s potential suspension of parts of the AI Act underscores Brussels’ evolving approach to digital regulation amid intensifying global competition from the US and China.