The surge in generative artificial intelligence (AI) development has prompted governments globally to rush toward regulating the emerging technology. The trend mirrors the European Union's efforts to implement the world's first comprehensive set of rules for AI.

The EU AI Act is widely regarded as a landmark set of regulations. After several delays, reports indicate that on Dec. 7, negotiators agreed to a set of controls for generative AI tools such as OpenAI's ChatGPT and Google's Bard.

Concerns about the potential misuse of the technology have also propelled the United States, the United Kingdom, China and other G7 countries to speed up their work toward regulating AI.

In June, the Australian government announced an eight-week consultation, later extended until July 26, to gather feedback on whether "high-risk" AI tools should be banned. The government sought input on strategies to promote the "safe and responsible use of AI," exploring options ranging from voluntary measures such as ethical frameworks to specific regulations, or a combination of both.

Meanwhile, China has introduced temporary regulations, effective Aug. 15, to oversee the generative AI industry, mandating that service providers undergo security assessments and obtain clearance before introducing AI products to the mass market. After obtaining government approvals, four Chinese technology companies, including Baidu and SenseTime, unveiled their AI chatbots to the public on Aug. 31.


According to a Politico report, France's privacy watchdog, the Commission Nationale Informatique & Libertés (CNIL), said in March that it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules, a ban imposed despite warnings from civil rights groups.

The Italian Data Protection Authority announced the launch of a "fact-finding" investigation on Nov. 22 to examine the data-gathering processes used to train AI algorithms. The inquiry seeks to verify that public and private websites have implemented adequate security measures to prevent third parties from "web scraping" personal data for AI training.

The U.S., the U.K., Australia and 15 other countries have recently released global guidelines to help protect AI models from being tampered with, urging companies to make their models “secure by design.”
