The former deputy PM – now an executive at the company that owns Facebook and Instagram – says the line between human and “synthetic content” is becoming “blurred”, as the firm said it planned to label all AI images on its platforms.

Meta, which also owns the Threads social media site, has already been placing “Imagined with AI” labels on photorealistic images created using its own Meta AI feature.

The tech giant said it is now building “industry-leading tools” that will allow it to identify invisible markers on images generated by artificial intelligence that have come from other sites such as Google, OpenAI, Microsoft or Adobe.

Meta has said it will roll out the labelling on Facebook, Instagram and Threads in the coming months.

Sir Nick Clegg, who is now Meta’s president of global affairs, wrote in a statement that the move comes during a year when a “number of important elections are taking place around the world”.

He added: “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.”

Sir Nick said the move is important at a time when “the difference between human and synthetic content” is becoming “blurred”.

Meta says it has been working with “industry partners on common technical standards for identifying AI content”, adding that it will be able to label AI-generated images when its technology detects “industry standard indicators”.

The company says the labels will come in “all languages”.

Why has Meta decided now to announce a big shift in its efforts to get to grips with AI-generated images and video?



Tom Clarke

Science and technology editor

@t0mclark3

Well first, it’s become impossible to ignore.

By one recent estimate, 15 billion AI-generated images have been uploaded to the internet since 2022 alone. Like much of the content online, most of them fall into the harmless, even silly, variety: cute kittens, sci-fi, anime.

But a large number are harmful. Things like fake explicit images of public or private individuals uploaded without their consent, or politically motivated misinformation designed to distort the truth.

But the other reason for the reaction is that companies like Meta know they are going to be forced to do something about it.

The UK passed the Online Safety Act last year which makes uploading fake explicit images of a person without their consent a crime. Lawmakers in the US last week told social media bosses that they were failing in their duty to keep users safe online and that laws to compel them to do more were now the only course of action.

Will Meta’s announcement make a difference? Yes, in that it will likely compel its rivals to follow suit, and it will certainly help make it clearer which images are AI-generated and which aren’t.

But several research teams have shown that digital watermarking – even watermarks buried in the metadata of an image – can be removed with little expertise. Even Meta admits the technology isn’t perfect.
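To illustrate how little expertise that takes, here is a minimal sketch, assuming the Pillow imaging library and hypothetical file names, showing that simply re-saving an image without copying its metadata discards any provenance note stored there. It does not touch pixel-level invisible watermarks, which are harder to strip, though the research mentioned above suggests not impossible.

```python
# A minimal sketch, assuming Pillow is installed and that "ai_labelled.jpg"
# is a hypothetical image carrying a provenance note in its EXIF metadata.
from PIL import Image

original = Image.open("ai_labelled.jpg")
print(dict(original.getexif()))          # EXIF tags, including any AI label

# save() only keeps EXIF data if it is passed explicitly (exif=...), so a
# plain re-save produces a copy with the metadata gone.
original.save("stripped.jpg", quality=95)

print(dict(Image.open("stripped.jpg").getexif()))  # typically empty now
```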

The real test will be whether we see, in the coming months, a decrease in the explosion of harmful fake images appearing online. And that’s probably going to be easier said than done.

While a superstar like Taylor Swift might be able to pressure Big Tech into taking down illegal images of her, the same can’t be said for the 3.5 billion users of one Meta platform or another.

If that doesn’t happen, the next test will be whether we see large and powerful tech companies in court over the issue. Some predict only hitting Big Tech in the pocket will really bring about change.

Sir Nick has said it’s not yet possible for Meta to identify all AI-generated content – with those who produce the images able to strip out invisible markers.

He added: “We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks.”

Sir Nick said this part of Meta’s work is important because the use of AI is “likely to become an increasingly adversarial space in the years ahead”.

An AI-generated image of Elon Musk. Pic: Full Fact

“People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead,” he said.

Meta also plans to add a feature to its platform that will allow people to disclose when they are sharing AI-generated content so the company can add a label to it.

Read more:
Meta boss grilled over child exploitation concerns

Facebook turns 20: From Zuckerberg’s dormitory to a $1trn company
Eight AI-generated images that have caught people out

A fake AI-generated image of Julian Assange in prison. Pic: Full Fact

Taylor Swift targeted in AI images

AI images have proven controversial in recent months – with many of them so realistic that users are often unable to tell they are not real.

In January, deepfake images of pop superstar Taylor Swift, which were believed to have been made using AI, were spread widely on social media.

Swift deepfake: White House ‘alarmed’

US President Biden’s spokesperson said the sexually explicit images of the star were “very alarming”.

White House Press Secretary Karine Jean-Pierre said social media companies have “an important role to play in enforcing their own rules”, as she urged Congress to legislate on the issue.

A royal reunion that was not all it seemed

In the UK, a slideshow of eight images appearing to show Prince William and Prince Harry at the King’s coronation spread widely on Facebook in 2023, with more than 78,000 likes.

One of them showed a seemingly emotional embrace between William and Harry after reports of a rift between the brothers.

However, none of the eight images were genuine.

Meanwhile, an AI-generated mugshot of Donald Trump when he was formally booked on 13 election fraud charges fooled many people around the world in 2023.

Elon Musk sues OpenAI and Sam Altman, saying company putting profit over the public good

Elon Musk, the multi-billionaire owner of Tesla and X, is suing artificial intelligence company OpenAI, accusing the firm of prioritising profit over developing AI for the public good.

Mr Musk is bringing the suit against OpenAI, which he co-founded, and its chief executive, Sam Altman, for breaching a contract by reneging on its pledge to develop AI carefully and make the tech widely available.

The company behind the ground-breaking generative AI chatbot, ChatGPT, has “been transformed into a closed-source de facto subsidiary of the largest technology company, Microsoft”, a court filing said.

The court action is the latest in a series of challenges to Mr Altman, who was ousted from his position at OpenAI by the company’s board and briefly went to work at Microsoft, OpenAI’s biggest shareholder, before being returned to his post.

The AI giant was originally founded as a not-for-profit company but has grown to have commercial interests, which has caused tension between board members and founders.

By embracing a close relationship with Microsoft, OpenAI and its top executives have set that pact “aflame” and are “perverting” the company’s mission, Mr Musk alleges in the lawsuit.

“Under its new board, it is not just developing but is actually refining an AGI [artificial general intelligence] to maximize profits for Microsoft, rather than for the benefit of humanity”, the filing said.

OpenAI unveils new video tool

A key part of OpenAI’s mission to benefit humanity, the court filing said, was to make the company’s software open source and share it, but this has not happened.

Instead, the company operates on a for-profit model.

Read more:
How the chaos at OpenAI has unfolded
Snapchat flagged in nearly half of child abuse imagery crimes in past year

Mr Musk has his own AI company, called xAI, and has said OpenAI is not focused enough on the potential harms of AI.

As well as alleging breach of contract, Mr Musk’s claim says OpenAI is violating its fiduciary duty and engaging in unfair business practices. Mr Musk is seeking a jury trial.

OpenAI and Microsoft have been contacted for comment.

Home Office figures show how vital immigration is to the economy

The Home Office immigration system statistics for 2023 tell a different story to the one that dominates the political discourse.

While government commentary and policy have focused on illegal migration via small boats, the largest driver of rising immigration is people coming to work, primarily in a health and care sector that would not function without them.

Some 616,000 work visas were issued in 2023, 337,240 to “primary applicants”, up 26% on 2022 and a staggering 250% rise on pre-pandemic levels, with a further 279,131 to their dependants, an increase of 81%.

Health and social care visas were the largest driver of the increase, the number almost doubling in a year to 146,477, with more than 100,000 of these granted to carers.

This expansion is the consequence of a deliberate policy decision in 2021 to make up a post-COVID, post-Brexit shortfall in staff.

With preferential status removed from European Union candidates, south Asia and west and southern Africa are the primary sources of care workers.

More than 18,000 came from India, with 7,000 each from Bangladesh and Pakistan. A further 18,000 came from Nigeria, 15,000 from Zimbabwe and 10,000 from Ghana.

Applications for skilled work visas in other sectors were broadly flat, perhaps reflecting a cooling labour market in a flatlining economy that has almost a million job vacancies and 2.5 million workers classified as long-term sick.

December: Warning of fraud around care jobs

Home Secretary James Cleverly has moved to cut numbers, banning care workers from bringing dependants, a change that may force recruiters to spread the net even wider to fill holes in British care homes.

The minimum salary threshold for skilled worker visas is also rising to £38,700 a year, up more than 50% and now above the average salary, but such is the acute challenge facing the NHS that health and care employers are exempt from paying the new figure.

One area where the government can point to falling immigration is among students, but that will be no cause for celebration in higher education, where fees from overseas candidates underwrite the cost of teaching the domestic population.

Student visa applications fell 5% to 616,000, reflecting a more competitive international market and a tightening of rules from this year, which will see only postgraduates able to bring family members with them.

There was also a small decrease in the number of temporary visas granted to seasonal workers in agriculture, who now overwhelmingly come from central Asia, but that was offset by a rise in youth mobility visas granted to under-30s from 12 eligible countries including Canada, Australia, New Zealand and South Korea.

From health and care to agriculture and education, cutting immigration will come at a price.

Sainsbury’s to cut 1,500 jobs in cost-cutting plan

Sainsbury’s has revealed plans to cut around 1,500 roles as part of a previously announced shake-up of its operations.

Sky News revealed earlier this month how the company, which also owns Argos, had refused to rule out job losses under the strategy update for investors.

It included a greater focus on food within its supermarkets, claiming more space from general merchandise and clothing.

Sainsbury’s said it was also targeting greater use of automation under the plans, which aimed to save £1bn over three years to boost investment in the business.

The company said it hoped to redeploy many of the 1,500 people affected by the changes.

The jobs will go in its store support centre, its contact centre operations, its in-store bakeries and its general merchandise fulfilment network.

Sainsbury’s said it had proposed to colleagues in its Widnes contact centre, who operate the Careline service, that they should transfer to an existing partner.

It said a more efficient way of providing its bakery service meant jobs would go in that part of the business.

Chief executive Simon Roberts said: “Our Next Level Sainsbury’s strategy is about giving customers more of what they come to Sainsbury’s for – outstanding value, unbeatable quality food and great service.

“One of the ways we’re going to deliver on this promise is through our Save and Invest to Win programme.

“As we move into the next phase of our strategy, we are making some difficult, but necessary decisions.

“The proposals we’ve been talking to teams about today are important to ensure we’re better set up to focus on the things that create a real impact for our customers, delivering good food for all of us and building a platform for growth.

“I know today’s news is unsettling for affected colleagues and we will do everything we can to support them.”
