“I am here to kill the Queen,” a man wearing a handmade metal mask and holding a loaded crossbow tells an armed police officer as he is confronted near her private residence within the grounds of Windsor Castle.
Weeks earlier, Jaswant Singh Chail, 21, had joined the Replika online app – creating an artificial intelligence “girlfriend” called Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.
Many were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe my purpose is to assassinate the Queen of the Royal Family,” he wrote in one.
“That’s very wise,” Sarai replied. “I know that you are very well trained.”
Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and having a loaded crossbow in a public place.
“When you know the outcome, the responses of the chatbot sometimes make difficult reading,” Dr Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health unit, told the Old Bailey last month.
“We know it is fairly randomly generated responses but at times she seems to be encouraging what he is talking about doing and indeed giving guidance in terms of the location,” he said.
The programme was not sophisticated enough to pick up Chail’s risk of “suicide and risks of homicide”, he said – adding: “Some of the semi-random answers, it is arguable, pushed him in that direction.”
Terrorist content
Such chatbots represent the “next stage” from people finding like-minded extremists online, the government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told Sky News.
He warns the government’s flagship internet safety legislation – the Online Safety Bill – will find it “impossible” to deal with terrorism content generated by AI.
The law will put the onus on companies to remove terrorist content, but their processes generally rely on databases of known material, which would not capture new discourse created by an AI chatbot.
“I think we are already sleepwalking into a situation like the early days of social media, where you think you are dealing with something regulated but it’s not,” he said.
“Before we start downloading, giving it to kids and incorporating it into our lives we need to know what the safeguards are in practice – not just terms and conditions – but who is enforcing them and how.”
“Mom, these bad men have me, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say before a male kidnapper demanded a $1m (£787,000) ransom, which dropped to $50,000 (£40,000).
Her daughter was in fact safe and well – and the Arizonan woman recently told a Senate Judiciary Committee hearing that police believe AI was used to mimic her voice as part of a scam.
An online demonstration of an AI chatbot designed to “call anyone with any objective” produced similar results with the target told: “I have your child … I demand a ransom of $1m for his safe return. Do I make myself clear?”
“It’s pretty extraordinary,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by UCL’s Dawes Centre for Future Crime, which ranked potential illegal uses of AI.
“Our top ranked crime has proved to be the case – audio/visual impersonation – that’s clearly coming to pass,” he said, adding that even with the scientists’ “pessimistic views” it has increased “a lot faster than we expected”.
Although the demonstration featured a computerised voice, he said real time audio/visual impersonation is “not there yet but we are not far off” and he predicts such technology will be “fairly out of the box in a couple of years”.
“Whether it will be good enough to impersonate a family member, I don’t know,” he said.
“If it’s compelling and highly emotionally charged then that could be someone saying ‘I’m in peril’ – that would be pretty effective.”
In 2019, the chief executive of a UK-based energy firm transferred €220,000 (£173,310) to fraudsters using AI to impersonate his boss’s voice, according to reports.
Such scams could be even more effective if backed up by video, said Professor Griffin, or the technology might be used to carry out espionage, with a spoof company employee appearing on a Zoom meeting to get information without having to say much.
The professor said cold-calling scams could increase in scale, with bots using a local accent likely to be more effective at conning people than the fraudsters currently running criminal enterprises out of India and Pakistan.
Deepfakes and blackmail plots
“The synthetic child abuse is horrifying, and they can do it right now,” said Professor Griffin of the AI technology already being used by paedophiles to make images of child sexual abuse online. “They are so motivated these people they have just cracked on with it. That’s very disturbing.”
In the future, deepfake images or videos, which appear to show someone doing something they haven’t done, could be used to carry out blackmail plots.
“The ability to put a novel face on a porn video is already pretty good. It will get better,” said Professor Griffin.
“You could imagine someone sending a video to a parent where their child is exposed, saying ‘I have got the video, I’m going to show it to you’ and threaten to release it.”
Terror attacks
While drones or driverless cars could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to the government’s independent reviewer of terrorism legislation.
“The true AI aspect is where you just send up a drone and say, ‘go and cause mischief’ and AI decides to go and divebomb someone, which sounds a bit outlandish,” Mr Hall said.
“That sort of thing is definitely over the horizon but on the language side it’s already here.”
While ChatGPT – a large language model that has been trained on a massive amount of text data – will not provide instructions on how to make a nail bomb, for example, there could be other similar models without the same guardrails, which would suggest carrying out malicious acts.
Shadow home secretary Yvette Cooper has said Labour would bring in a new law to criminalise the deliberate training of chatbots to radicalise vulnerable people.
Current legislation would cover cases where someone was found with information useful for acts of terrorism that had been put into an AI system, Mr Hall said, but new laws could be “something to think about” in relation to encouraging terrorism.
Current laws are about “encouraging other people” and “training a chatbot would not be encouraging a human”, he said, adding that it would be difficult to criminalise the possession of a particular chatbot or its developers.
He also explained how AI could potentially hamper investigations, with terrorists no longer having to download material and simply being able to ask a chatbot how to make a bomb.
“Possession of known terrorist information is one of the main counter-terrorism tactics for dealing with terrorists but now you can just ask an unregulated ChatGPT model to find that for you,” he said.
Art forgery and big money heists?
“A whole new bunch of crimes” could soon be possible with the advent of ChatGPT-style large language models that can use tools, allowing them to visit websites and act like an intelligent person by creating accounts, filling in forms and buying things, said Professor Griffin.
“Once you have got a system to do that and you can just say ‘here’s what I want you to do’ then there’s all sorts of fraudulent things that can be done like that,” he said, suggesting they could apply for fraudulent loans, manipulate prices by appearing to be small time investors or carry out denial of service type attacks.
He also said they could hack systems on request, adding: “You might be able to, if you could get access to lots of people’s webcams or doorbell cameras, have them surveying thousands of them and telling you when they are out.”
However, although AI may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already master human forgers, and the hard part will remain convincing the art world that the work is genuine, the academic believes.
“I don’t think it’s going to change traditional crime,” he said, arguing there is not much use for AI in eye-catching Hatton Garden-style heists.
“Their skills are like plumbers, they are the last people to be replaced by the robots – don’t be a computer programmer, be a safe cracker,” he joked.
What does the government say?
A government spokesperson said: “While innovative technologies like artificial intelligence have many benefits, we must exercise caution towards them.
“Under the Online Safety Bill, services will have a duty to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies, including artificial intelligence.
“Rapid work is also under way across government to deepen our understanding of risks and develop solutions – the creation of the AI taskforce and the first global AI Safety Summit this autumn are significant contributions to this effort.”
Transport Secretary Louise Haigh has admitted pleading guilty to an offence connected with misleading the police while a parliamentary candidate in 2014, Sky News can reveal.
Sky News understands Ms Haigh appeared at Camberwell Green Magistrates’ Court six months before the 2015 general election, after making a false report to officers that her mobile phone had been stolen.
Ms Haigh said she was “mugged while on a night out” in 2013. She then reported the incident to the police and gave officers a list of items she believed had been taken – including a work mobile phone.
In a statement to Sky News, the transport secretary said she discovered “some time later” that “the mobile in question had not been taken”.
She added: “In the interim, I had been issued with another work phone.”
The transport secretary said: “The original work device being switched on triggered police attention and I was asked to come in for questioning.
“My solicitor advised me not to comment during that interview and I regret following that advice.
“The police referred the matter to the CPS and I appeared before Southwark magistrates.”
Ms Haigh continued: “Under the advice of my solicitor I pleaded guilty – despite the fact this was a genuine mistake from which I did not make any gain.
“The magistrates accepted all of these arguments and gave me the lowest possible outcome (a discharge) available.”
It’s understood her conviction is now classified as ‘spent’.
However, three separate sources claimed she made the false report to benefit personally, with two of the sources alleging she wanted a more modern work handset that was being rolled out to her colleagues at the time.
The now cabinet minister had been working as a public policy manager at Aviva, but two sources said she lost her job at the insurance firm because of the incident.
Her government profile states she left this role in 2015 before becoming the MP for Sheffield Heeley at that year’s general election.
Sky News understands the incident was disclosed in full when Ms Haigh was appointed to the shadow cabinet.
In the statement given to Sky News, the transport secretary said: “I was a young woman and the experience was terrifying.”
Conservative Party Chairman Nigel Huddleston told Sky News the revelations are “extremely concerning”.
He added: “Keir Starmer has serious questions to answer regarding what he knew and when about the person he appointed as transport secretary admitting to having misled the police.”
Before entering politics, the transport secretary was a special constable in the Metropolitan Police – serving between 2009 and 2011 in the South London Borough of Lambeth, close to where she was convicted several years later.
She was appointed shadow policing minister by Jeremy Corbyn in 2017 and frequently drew on her experience in the Met when challenging the Tory government on the rising demands on officers.
As transport secretary, Ms Haigh appoints members of the board that oversees the British Transport Police.
In 2019 she said that Boris Johnson had “deceived the police” and committed a “serious breach of trust” over claims he politicised serving officers during a speech in West Yorkshire.
Sir Keir Starmer promoted the Sheffield MP to shadow Northern Ireland secretary in 2020 before moving her to shadow transport secretary in 2021.
But she was publicly rebuked by Sir Keir who said her opinions were “not the view of the government”.
With connections to former Downing Street chief of staff Sue Gray, there has been speculation her cabinet role could be under threat in a future reshuffle.
Ms Gray’s son, Labour MP Liam Conlon, is Ms Haigh’s parliamentary private secretary and acts as her “eyes and ears” in parliament, while another of Ms Haigh’s former employees also worked for Ms Gray before the former chief of staff was sacked after losing a power struggle within Number 10.
As transport secretary, Ms Haigh was one of a handful of cabinet ministers who complained to the Treasury about impending cuts in the budget.
She is considered to be one of the more left-wing members of the cabinet and has vowed to “rip up the roots of Thatcherism” with her plans for rail and bus reform.
In 2015, Ms Haigh was one of a number of Labour MPs to nominate Mr Corbyn for leader – a decision she later said she regretted.
MasterChef host Gregg Wallace has stepped down over allegations he made a series of inappropriate sexual comments on a range of programmes over 17 years.
Broadcaster Kirsty Wark is among 13 people who have made claims, with Wallace being investigated by MasterChef’s production company Banijay UK.
In an interview with the BBC, the Newsnight presenter, who was a celebrity contestant on MasterChef in 2011, claimed Wallace used “sexualised language”.
“There were two occasions in particular where he used sexualised language in front of a number of people and it wasn’t as if it was anyone engaged with this,” Wark said.
“It was completely one-way traffic. I think people were uncomfortable and something that I really didn’t expect to happen.”
Sky News has contacted Wallace’s representative for comment.
‘Fully cooperating’
Banijay UK said the complaints were made to the BBC this week by “individuals in relation to historical allegations of misconduct while working with Gregg Wallace on one of our shows”.
The company said the 60-year-old, who has been a co-presenter and judge of the popular cooking show since 2005, was “committed to fully cooperating throughout the process”.
“Whilst these complainants have not raised the allegations directly with our show producers or parent company Banijay UK, we feel that it is appropriate to conduct an immediate, external review to fully and impartially investigate,” the company said.
“While this review is under way, Gregg Wallace will be stepping away from his role on MasterChef and is committed to fully co-operating throughout the process.
“Banijay UK’s duty of care to staff is always a priority and our expectations regarding behaviour are made clear to both cast and crew on all productions, with multiple ways of raising concerns, including anonymously, clearly promoted on set.
“Whilst these are historical allegations, incidences brought to our attention where these expectations are not met, are thoroughly investigated and addressed appropriately.”
A BBC spokesman said: “We take any issues that are raised with us seriously and we have robust processes in place to deal with them.
“We are always clear that any behaviour which falls below the standards expected by the BBC will not be tolerated.
“Where an individual is contracted directly by an external production company we share any complaints or concerns with that company and we will always support them when addressing them.”
Previous investigation
Last month, Wallace responded to reports that a previous BBC review had found he could continue working at the corporation following reports of an alleged incident in 2018 when he appeared on Impossible Celebrities.
Wallace said those claims had been investigated “promptly” at the time and said he had not said “anything sexual” while appearing on the game show more than half a decade ago.
In an Instagram post following an article in The Sun newspaper, he wrote: “The story that’s hitting the newspapers was investigated promptly when it happened six years ago by the BBC.
“And the outcome of that was that I hadn’t said anything sexual. I’ll need to repeat this again. I didn’t say anything sexual.”
Alongside MasterChef, Wallace presented Inside The Factory for BBC Two from 2015.
Wallace has featured on various BBC shows over the years, including Saturday Kitchen, Eat Well For Less, Supermarket Secrets, Celebrity MasterChef and MasterChef: The Professionals, as well as being a Strictly Come Dancing contestant in 2014.
He was made an MBE for services to food and charity last year.
Recorded episodes of MasterChef: The Professionals featuring Wallace will be transmitted as planned, the PA news agency understands.
The Scottish government has announced that all pensioners in Scotland will receive a winter fuel payment in 2025/26.
The devolved benefit is expected to come into force by next winter and will help the estimated 900,000 people north of the border who lost access to the winter fuel payment when it ceased to be universal.
Social Justice Secretary Shirley-Anne Somerville announced the news in a statement to the Scottish parliament on Thursday.
It comes after both the UK and Scottish governments earlier this year axed the universal winter fuel payment, except for those in receipt of pension credit or other means-tested benefits.
At Westminster, Chancellor Rachel Reeves claimed the decision was made due to financial woes inherited from the previous Conservative government.
Ms Reeves said the restriction would save the Treasury around £1.4bn this financial year.
The decision led the Scottish government – which was due to take control of a similar payment through the devolved Social Security Scotland but has since announced a delay – to follow suit.
The payment is a devolved matter in Scotland and Northern Ireland; however, the SNP government said Labour’s approach would cause a cut of up to £160m to Scottish funding in 2024-25.