AI Titans Meta, Microsoft, Nvidia Collaborate With White House On Safety Standards


The Biden Administration is calling on a long list of tech companies and banking firms to tackle the dangers of artificial intelligence (AI).

Gina Raimondo, Secretary of Commerce, announced the creation of the U.S. AI Safety Institute Consortium (AISIC). The effort unites a who's who of AI developers with academics, researchers and civil society organizations.

The goal is to draft development and deployment guidelines, approaches to risk management and other safety issues.

"President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That's precisely what this consortium is set up to help us do," Raimondo said.

Among the players involved are Meta Platforms Inc (META), Microsoft Corp (MSFT), Nvidia Corp (NVDA), JPMorgan Chase (JPM) and Citigroup (C).

The full list of consortium participants is available here.

Key Measures Announced

The guidelines include the use of watermarks to identify AI-generated audio and visual content to the public.

Several companies have announced in recent weeks that they are stepping up their AI safety efforts.

Facebook and Instagram owner Meta has said it will start labeling AI-generated images and videos on its platforms. Microsoft's partner OpenAI has said it will launch a tool to identify AI content.

Also Read: Big Tech Must Protect Democracy: EU Drafts Misinformation Guidelines Ahead Of Elections

EU Guidelines

Earlier this week, European Union officials began a process to compile guidelines for big tech companies to follow. The goal is to safeguard the democratic process from AI-generated misinformation or deepfake imagery that could negatively influence voters. After all, a third of the world's population votes in national elections later this year.

The effort came just as Meta found itself under scrutiny due to an altered video of President Joe Biden appearing on Facebook.

Pop star Taylor Swift has also been the target of troubling deepfake videos.

Europe has already drafted a large document that caused something of a stir. Companies groaned at the likely compliance costs they'd face. Others fretted over whether more legislation would mean less innovation, as developments in the startlingly fast-moving industry outpace regulators' ability to legislate.

"Its exponential evolution means that regulations drafted now could be completely left behind by what they are trying to rein in long before they come into force," said Simon Portman and Mike Williams, intellectual property lawyers at Marks & Clerk.

In a forerunner to AISIC, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC) drew up an agreement, signed by 18 countries, on recommendations to keep artificial intelligence (AI) safe from hackers and other rogue actors who might wield the technology irresponsibly.

The agreement's proposals centered on the principle of "secure by design," meaning that all safety considerations should be taken into account at the development stage of new AI deployments and use cases.

Now Read: AI Regulation: Did The EU Just Deliver The Future Of AI Directly Into The Hands Of The US?

Image: Shutterstock
