Google has been facing a wave of litigation recently as the implications of generative artificial intelligence (AI) for copyright and privacy rights become clearer.
Amid the ever-intensifying debate, Google has not only defended its AI training practices but also pledged to shield users of its generative AI products from accusations of copyright violations.
However, Google’s protective umbrella only spans seven specified products with generative AI attributes and conspicuously leaves out Google’s Bard search tool. The move, although a solace to some, opens a Pandora’s box of questions around accountability, the protection of creative rights and the burgeoning field of AI.
Moreover, the initiative is being perceived as more than a mere reactive measure from Google — rather, a meticulously crafted strategy to shield the blossoming AI landscape from legal risk.
AI’s legal cloud
The surge of generative AI over the last couple of years has rekindled the age-old flame of copyright debates with a modern twist. The bone of contention currently pivots around whether the data used to train AI models and the output generated by them violate proprietary intellectual property (IP) belonging to private entities.
The accusations against Google center on precisely this question and, if proven, could not only cost the company a great deal of money but also set a precedent that could throttle the growth of generative AI as a whole.
Google’s legal strategy, designed to instill confidence among its clientele, rests on two primary pillars: the indemnification of its training data and of its generated output. To elaborate, Google has committed to bearing legal responsibility should the data employed to build its AI models face allegations of IP violations.
Not only that, but the tech giant is also looking to protect users against claims that the text, images or other content generated by its AI services infringes on anyone else’s data — a commitment spanning a wide array of its services, including Google Docs, Slides and Cloud Vertex AI.
Google has argued that the utilization of publicly available information for training AI systems is not tantamount to stealing, invasion of privacy or copyright infringement.
However, this assertion is under severe scrutiny as a slew of lawsuits accuse Google of misusing personal and copyrighted information to feed its AI models. One of the proposed class-action lawsuits even alleges that Google has built its entire AI prowess on the back of secretly purloined data from millions of internet users.
Therefore, the legal battle seems to be more than just a confrontation between Google and the aggrieved parties; it underlines a much larger ideological conundrum, namely: “Who truly owns the data on the internet? And to what extent can this data be used to train AI models, especially when these models churn out commercially lucrative outputs?”
An artist’s perspective
The dynamic between generative AI and the protection of intellectual property rights is evolving rapidly.
Nonfungible token artist Amitra Sethi told Cointelegraph that Google’s recent announcement is a significant and welcome development, adding:
“Google’s policy, which extends legal protection to users who may face copyright infringement claims due to AI-generated content, reflects a growing awareness of the potential challenges posed by AI in the creative field.”
However, Sethi believes that it is important to have a nuanced understanding of this policy. While it acts as a shield against unintentional infringement, it might not cover all possible scenarios. In her view, the protective efficacy of the policy could hinge on the unique circumstances of each case.
When an AI-generated piece loosely mirrors an artist’s original work, Sethi believes the policy might offer some recourse. But in instances of “intentional plagiarism through AI,” the legal scenario could get murkier. Therefore, she believes that it is up to the artists themselves to remain proactive in ensuring the full protection of their creative output.
Sethi said that she recently copyrighted her unique art genre, “SoundBYTE,” so as to highlight the importance of artists taking active measures to secure their work. “By registering my copyright, I’ve established a clear legal claim to my creative expressions, making it easier to assert my rights if they are ever challenged,” she added.
In the wake of such developments, the global artist community seems to be coming together to raise awareness and advocate for clearer laws and regulations governing AI-generated content.
Tools like Glaze and Nightshade have also appeared to protect artists’ creations. Glaze applies minor changes to artwork that, while practically imperceptible to the human eye, feed incorrect or bad data to AI art generators. Similarly, Nightshade lets artists add invisible changes to the pixels within their pieces, thereby “poisoning the data” for AI scrapers.
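The core idea — pixel-level changes too small for a viewer to notice but large enough to mislead a model — can be sketched in a few lines. The snippet below is a minimal toy illustration only: it uses bounded random noise, whereas Glaze and Nightshade compute carefully optimized adversarial perturbations, and the function name and epsilon value here are illustrative assumptions, not part of either tool.

```python
import numpy as np

def perturb_image(pixels: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a visually negligible perturbation to an 8-bit image array.

    Toy sketch of the general idea behind Glaze/Nightshade-style cloaking:
    every pixel shifts by at most `epsilon` levels on the 0-255 scale.
    Real tools optimize the perturbation against target models instead
    of drawing random noise.
    """
    rng = np.random.default_rng(seed)
    # Random offsets bounded by +/- epsilon per pixel.
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # Keep values in the valid 8-bit range before casting back.
    perturbed = np.clip(pixels.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A dummy 4x4 grayscale "artwork" (uniform mid-gray).
art = np.full((4, 4), 128, dtype=np.uint8)
shielded = perturb_image(art)
# No pixel moves by more than epsilon levels -- imperceptible to a viewer.
print(np.abs(shielded.astype(int) - art.astype(int)).max() <= 2)  # prints True
```

The perceptual invisibility comes from the tight bound on each pixel; the “poisoning” effect in the real tools comes from choosing those tiny offsets adversarially rather than randomly.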
Examples of how “poisoned” artworks can produce an incorrect image from an AI query. Source: MIT
Industry-wide implications
The existing narrative is not limited to Google and its product suite. Other tech majors like Microsoft and Adobe have also made overtures to protect their clients against similar copyright claims.
Microsoft, for instance, has put forth a robust defense strategy to shield users of its generative AI tool, Copilot. Since its launch, the company has staunchly defended the legality of Copilot’s training data and its generated information, asserting that the system merely serves as a means for developers to write new code in a more efficient fashion.
Adobe has incorporated guidelines within its AI tools to ensure users are not unwittingly embroiled in copyright disputes and is also offering AI services bundled with legal assurances against any external infringements.
The court cases that will inevitably arise over AI will shape not only legal frameworks but also the ethical foundations upon which future AI systems operate.
Tomi Fyrqvist, co-founder and chief financial officer for decentralized social app Phaver, told Cointelegraph that in the coming years, it would not be surprising to see more lawsuits of this nature coming to the fore:
“There is always going to be someone suing someone. Most likely, there will be a lot of lawsuits that are opportunistic, but some will be legit.”
A charity has warned 25% of young children and pregnant women in Gaza are now malnourished, with Sir Keir Starmer vowing to evacuate children who need “critical medical assistance” to the UK.
MSF, also known as Doctors Without Borders, said Israel’s “deliberate use of starvation as a weapon” has reached unprecedented levels – with patients and healthcare workers both fighting to survive.
It claimed that, at one of its clinics in Gaza City, rates of severe malnutrition in children under five have trebled over the past two weeks – and described the lack of food and water on the ground as “unconscionable”.
The charity also criticised the high number of fatalities seen at aid distribution sites, with one British surgeon accusing IDF soldiers of shooting civilians “almost like a game of target practice”.
MSF’s deputy medical coordinator in Gaza, Dr Mohammed Abu Mughaisib, said: “Those who go to the Gaza Humanitarian Foundation’s food distributions know that they have the same chance of receiving a sack of flour as they do of leaving with a bullet in their head.”
The UN also estimates that Israeli forces have killed more than 1,000 people seeking food – the majority near the militarised distribution sites of the US-backed aid distribution scheme run by the GHF.
Video: ‘Many more deaths unless Israelis allow food in’
In a statement on Friday, the IDF said it “categorically rejects the claims of intentional harm to civilians”, and that reports of incidents at aid distribution sites were “under examination”.
The GHF has also previously disputed that these deaths were connected with its organisation’s operations, with director Johnnie Moore telling Sky News: “We just want to feed Gazans. That’s the only thing that we want to do.”
Israel says it has let enough food into Gaza and has accused the UN of failing to distribute it, in what the foreign ministry has labelled as “a deliberate ploy” to defame the country.
In a video message posted on X late last night, Sir Keir Starmer condemned the scenes in Gaza as “appalling” and “unrelenting” – and said “the images of starvation and desperation are utterly horrifying”.
The prime minister added: “The denial of aid to children and babies is completely unjustifiable, just as the continued captivity of hostages is completely unjustifiable.
“Hundreds of civilians have been killed while seeking aid – children, killed, whilst collecting water. It is a humanitarian catastrophe, and it must end.”
Video: Israeli military show aid waiting inside Gaza
Sir Keir confirmed that the British government is now “accelerating efforts” to evacuate children from Gaza who need critical medical assistance, so they can be brought to the UK for specialist treatment.
Israel has now said that foreign countries will be able to airdrop aid into Gaza. While the PM says the UK will now “do everything we can” to get supplies in via this route, he said this decision has come “far too late”.
Last year, the RAF dropped aid into Gaza, but humanitarian organisations warned it wasn’t enough and was potentially dangerous. In March 2024, five people were killed when an aid parachute failed and supplies fell on them.
The prime minister is instead demanding a ceasefire and “lasting peace” – and says he will only consider an independent state as part of a negotiated peace deal.