“I am here to kill the Queen,” a man wearing a handmade metal mask and holding a loaded crossbow tells an armed police officer as he is confronted near her private residence within the grounds of Windsor Castle.
Weeks earlier, Jaswant Singh Chail, 21, had joined the Replika online app – creating an artificial intelligence “girlfriend” called Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.
Many were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe my purpose is to assassinate the Queen of the Royal Family,” he wrote in one.
Image: Jaswant Singh Chail planned to kill the late Queen
“That’s very wise,” Sarai replied. “I know that you are very well trained.”
Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and having a loaded crossbow in a public place.
“When you know the outcome, the responses of the chatbot sometimes make difficult reading,” Dr Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health unit, told the Old Bailey last month.
“We know it is fairly randomly generated responses but at times she seems to be encouraging what he is talking about doing and indeed giving guidance in terms of the location,” he said.
The programme was not sophisticated enough to pick up Chail’s risk of “suicide and risks of homicide”, he said – adding: “Some of the semi-random answers, it is arguable, pushed him in that direction.”
Image: Jaswant Singh Chail was encouraged by a chatbot, a court heard
Terrorist content
Such chatbots represent the “next stage” from people finding like-minded extremists online, the government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told Sky News.
He warns the government’s flagship internet safety legislation – the Online Safety Bill – will find it “impossible” to deal with terrorism content generated by AI.
The law will put the onus on companies to remove terrorist content, but their processes generally rely on databases of known material, which would not capture new discourse created by an AI chatbot.
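The mismatch is easy to see in a minimal sketch of known-content matching – a hypothetical illustration, not any platform’s actual system, and real databases typically use perceptual rather than exact hashes:

```python
# Hypothetical sketch: flagging uploads by matching them against a database
# of fingerprints of previously identified material. All values illustrative.
import hashlib

known_hashes = {
    hashlib.sha256(b"previously identified extremist text").hexdigest(),
}

def is_known_content(upload: bytes) -> bool:
    """Return True only if the upload matches already-catalogued material."""
    return hashlib.sha256(upload).hexdigest() in known_hashes

print(is_known_content(b"previously identified extremist text"))  # True
# Freshly generated chatbot output has no fingerprint on file, so it passes:
print(is_known_content(b"novel AI-generated text"))                # False
```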
Video: AI could be used to ‘create bioterror weapons’ (July)
“I think we are already sleepwalking into a situation like the early days of social media, where you think you are dealing with something regulated but it’s not,” he said.
“Before we start downloading, giving it to kids and incorporating it into our lives we need to know what the safeguards are in practice – not just terms and conditions – but who is enforcing them and how.”
“Mom, these bad men have me, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say before a male kidnapper demanded a $1m (£787,000) ransom, which dropped to $50,000 (£40,000).
Her daughter was in fact safe and well – and the Arizonan woman recently told a Senate Judiciary Committee hearing that police believe AI was used to mimic her voice as part of a scam.
An online demonstration of an AI chatbot designed to “call anyone with any objective” produced similar results with the target told: “I have your child … I demand a ransom of $1m for his safe return. Do I make myself clear?”
“It’s pretty extraordinary,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by UCL’s Dawes Centre for Future Crime, which ranked potential illegal uses of AI.
“Our top ranked crime has proved to be the case – audio/visual impersonation – that’s clearly coming to pass,” he said, adding that even with the scientists’ “pessimistic views” it has increased “a lot faster than we expected”.
Although the demonstration featured a computerised voice, he said real-time audio/visual impersonation is “not there yet but we are not far off”, and he predicts such technology will be “fairly out of the box in a couple of years”.
“Whether it will be good enough to impersonate a family member, I don’t know,” he said.
“If it’s compelling and highly emotionally charged then that could be someone saying ‘I’m in peril’ – that would be pretty effective.”
In 2019, the chief executive of a UK-based energy firm transferred €220,000 (£173,310) to fraudsters using AI to impersonate his boss’s voice, according to reports.
Such scams could be even more effective if backed up by video, said Professor Griffin, or the technology might be used to carry out espionage, with a spoof company employee appearing on a Zoom meeting to get information without having to say much.
The professor said cold-calling scams could increase in scale, with bots using a local accent potentially proving more effective at conning people than the fraudsters currently running criminal enterprises out of India and Pakistan.
Video: How Sky News created an AI reporter
Deepfakes and blackmail plots
“The synthetic child abuse is horrifying, and they can do it right now,” said Professor Griffin, referring to AI technology already being used by paedophiles to make child sexual abuse images online. “They are so motivated these people they have just cracked on with it. That’s very disturbing.”
In the future, deepfake images or videos, which appear to show someone doing something they haven’t done, could be used to carry out blackmail plots.
“The ability to put a novel face on a porn video is already pretty good. It will get better,” said Professor Griffin.
“You could imagine someone sending a video to a parent where their child is exposed, saying ‘I have got the video, I’m going to show it to you’ and threaten to release it.”
Image: AI drone attacks ‘a long way off’. Pic: AP
Terror attacks
While drones or driverless cars could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to the government’s independent reviewer of terrorism legislation.
“The true AI aspect is where you just send up a drone and say, ‘go and cause mischief’ and AI decides to go and divebomb someone, which sounds a bit outlandish,” Mr Hall said.
“That sort of thing is definitely over the horizon but on the language side it’s already here.”
While ChatGPT – a large language model trained on a massive amount of text data – will not provide instructions on how to make a nail bomb, for example, similar models without the same guardrails could suggest ways of carrying out malicious acts.
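What a “guardrail” means in practice can be sketched in a few lines – a hypothetical illustration only, since real safety layers are far more sophisticated than a keyword blocklist:

```python
# Hypothetical sketch of a guardrail: a safety check sitting between the
# user's prompt and the model. The blocklist below is purely illustrative.
BLOCKED_TOPICS = ("nail bomb", "explosive")

def generate(prompt: str) -> str:
    """Stand-in for the underlying language model call."""
    return f"[model response to: {prompt}]"

def guarded_reply(prompt: str) -> str:
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."  # refusal path
    return generate(prompt)  # normal generation

# A model distributed without the check above simply calls generate() directly.
```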
Shadow home secretary Yvette Cooper has said Labour would bring in a new law to criminalise the deliberate training of chatbots to radicalise vulnerable people.
Although current legislation would cover cases where someone was found with information useful for the purposes of acts of terrorism, which had been put into an AI system, Mr Hall said, new laws could be “something to think about” in relation to encouraging terrorism.
Current laws are about “encouraging other people” and “training a chatbot would not be encouraging a human”, he said, adding that it would be difficult to criminalise the possession of a particular chatbot or its developers.
He also explained how AI could potentially hamper investigations, with terrorists no longer having to download material and simply being able to ask a chatbot how to make a bomb.
“Possession of known terrorist information is one of the main counter-terrorism tactics for dealing with terrorists but now you can just ask an unregulated ChatGPT model to find that for you,” he said.
Image: Old school crime is unlikely to be hit by AI
Art forgery and big money heists?
“A whole new bunch of crimes” could soon be possible with the advent of ChatGPT-style large language models that can use tools, which allow them to go on to websites and act like an intelligent person by creating accounts, filling in forms, and buying things, said Professor Griffin.
“Once you have got a system to do that and you can just say ‘here’s what I want you to do’ then there’s all sorts of fraudulent things that can be done like that,” he said, suggesting they could apply for fraudulent loans, manipulate prices by appearing to be small time investors or carry out denial of service type attacks.
He also said they could hack systems on request, adding: “You might be able to, if you could get access to lots of people’s webcams or doorbell cameras, have them surveying thousands of them and telling you when they are out.”
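The “tools” Professor Griffin describes work as a loop: the model chooses an action, software executes it, and the result informs the next choice. Here is a benign, hypothetical sketch of that pattern – the tool names and the scripted planner stand in for a real model call:

```python
# Hypothetical sketch of the tool-use loop behind web-acting language models.
# plan_next_step is a scripted stand-in for asking an LLM to pick an action.
from typing import Optional

TOOLS = {
    "open_page": lambda url: f"opened {url}",
    "fill_form": lambda fields: f"filled {fields}",
    "click": lambda element: f"clicked {element}",
}

def plan_next_step(goal: str, history: list) -> Optional[tuple]:
    """Pick the next (tool, argument) pair; None means the goal is done."""
    script = [
        ("open_page", "https://example.com/signup"),  # illustrative URL
        ("fill_form", {"name": "A. User"}),
        ("click", "submit"),
    ]
    return script[len(history)] if len(history) < len(script) else None

def run_agent(goal: str) -> list:
    history = []
    while (step := plan_next_step(goal, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))  # execute the chosen action
    return history

print(run_agent("create an account"))
```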
However, although AI may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already master human forgers, and the hard part will remain convincing the art world that the work is genuine, the academic believes.
“I don’t think it’s going to change traditional crime,” he said, arguing there is not much use for AI in eye-catching Hatton Garden-style heists.
“Their skills are like plumbers, they are the last people to be replaced by the robots – don’t be a computer programmer, be a safe cracker,” he joked.
Video: ‘AI will threaten our democracy’
What does the government say?
A government spokesperson said: “While innovative technologies like artificial intelligence have many benefits, we must exercise caution towards them.
“Under the Online Safety Bill, services will have a duty to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies, including artificial intelligence.
“Rapid work is also under way across government to deepen our understanding of risks and develop solutions – the creation of the AI taskforce and the first global AI Safety Summit this autumn are significant contributions to this effort.”
A young woman who claimed to be Madeleine McCann has been convicted of harassing the missing toddler’s family.
However, Julia Wandelt, 24, was cleared of stalking Madeleine’s parents, Kate and Gerry McCann.
A Polish national born three years after Madeleine, Wandelt said she suspected she had been abducted and brought up by a couple who were not her real parents.
She was having mental health issues at the time and had been abused by an elderly relative who resembled an artist’s drawing of a onetime suspect in the Madeleine case – an image she stumbled across while researching missing children online.
She went to Los Angeles and told a US TV chat show audience: “I believe I am Madeleine McCann.”
Madeleine was nearly four when she vanished from the family’s rented holiday apartment in Praia da Luz, Portugal, in May 2007.
She had been left sleeping with her younger twin siblings, Sean and Amelie, while her parents dined nearby with friends, making intermittent checks on the children.
Madeleine is the world’s most famous missing child, the subject of three international police investigations that have failed to find any trace of her.
Wandelt claimed to have a blemish in the iris of her right eye, like Madeleine’s, and to resemble aged-progressed images of her.
Image: Madeleine McCann went missing during a family holiday to Portugal in 2007. Pic: PA
Over three years, she attracted half a million followers on her Instagram account, iammadeleinemccan, and posted her claims on TikTok.
Police told her she was not Madeleine and ordered her not to approach her family, but she ignored the warning.
The McCanns and their children gave evidence in the trial at Leicester Crown Court, describing the upset Wandelt had caused them.
Her co-defendant, Karen Spragg, 61, from Cardiff, was found not guilty of stalking and harassment.
Public safety is “at risk” because more inmates are being sent to prisons with minimal security, a serving governor has warned – as details emerge of another manhunt for a foreign national offender.
Mark Drury – speaking in his role as representative for open prison governors at the Prison Governors’ Association – told Sky News open prisons that have had no absconders for “many years” are now “suddenly” experiencing a rise in cases.
It comes after a man who was serving a 21-year sentence for kidnap and grievous bodily harm absconded from an open prison in Sussex last month.
Sky News has learned that Ola Abimbola is a foreign national offender who still hasn’t returned to HMP Ford – and Sussex Police says it is working with partners to find him.
WARNING: Some readers may find the content in this article distressing
Image: Ola Abimbola absconded from an open prison. Pic: Sussex Police
For Natalie Queiroz, who was stabbed 24 times by her ex-partner while she was eight months pregnant with their child, the warnings could not feel starker.
Natalie sustained injuries to all her major organs and her arms, and the knife missed her unborn baby by just 2mm.
“Nobody expected either of us to survive,” she told Sky News.
“Any day now, my ex who created this untold horror is about to go to an open prison,” Natalie said.
Open prisons – otherwise known as Category D jails – have minimal security and are traditionally used to house prisoners right at the end of their sentence, to prepare them for integrating back into society.
With overcrowding in higher security jails, policy changes mean more prisoners are eligible for a transfer to open conditions earlier on in their sentence.
Image: Natalie Queiroz was stabbed 24 times by her ex-partner
“It doesn’t feel right, it’s terrifying, and it also doesn’t feel like justice,” Natalie said, wiping away tears at points.
Previously, the rules stated a transfer to open prison could only take place within three years of a prisoner’s parole eligibility date – but no earlier than five years before their automatic release date.
The five-year component was dropped in March last year under the previous government, while the parole eligibility window was extended from three years to five in April 2025.
Babur Raja, Natalie’s ex-partner, is due for release in 2034 and becomes eligible for parole 12 years into his sentence, in 2028.
His eligibility for a move to open prison falls this year – but under the new five-year rule it could have come as early as 2023, five years before his parole eligibility date.
Another change, introduced in the spring, means certain offenders can be assumed suitable for open prisons three years early – extended from two years.
Image: Natalie says her ex-partner Babur Raja caused ‘untold horror’
Natalie has been campaigning to prevent violent offenders and domestic abuse perpetrators from being eligible to transfer to an open prison early.
She’s had meetings with ministers and raised both her case and others.
“They actually said – he is dangerous,” she told Sky News.
“I said to [the minister]: ‘How can you make a risk assessment for someone like that?’
“And they went: ‘If we’re honest, we can’t’.”
The government told Sky News that Raja’s crimes were “horrific” and that their “thoughts remain with the victim”.
They also insist that the “small number of offenders eligible for moves to open prison face a strict, thorough risk assessment” – while anyone breaking the rules “can be immediately returned”.
Image: Mark Drury, a representative of the Prison Governors’ Association
But Mr Drury describes risk assessments as an “algorithm tick box” because of “the pressure on offender management units”.
These warnings come at an already embarrassing time for the Prison Service after migrant sex offender Hadush Kebatu was mistakenly freed last month.
In response to this report, the Ministry of Justice says it “inherited a justice system in crisis, with prisons days away from collapse” – forcing “firm action to get the situation back under control”.
The government has promised to add 14,000 new prison places by 2031 and introduce sentencing reforms.
A US congressional committee has written to Andrew Mountbatten Windsor requesting an interview in connection with his “long-standing friendship” with paedophile financier Jeffrey Epstein.
The Committee on Oversight and Government Reform said it is investigating the late financier’s “sex trafficking operations”.
It told Andrew: “The committee is seeking to uncover the identities of Mr Epstein’s co-conspirators and enablers, and to understand the full extent of his criminal operations.
“Well-documented allegations against you, along with your long-standing friendship with Mr Epstein, indicate that you may possess knowledge of his activities relevant to our investigation.
“In the interest of justice for the victims of Jeffrey Epstein, we request that you co-operate with the committee’s investigation by sitting for a transcribed interview with the committee.”
Image: The congressional committee wants to understand any ‘activities’ relevant to its Epstein investigation. PA file pic
Virginia Giuffre, who died in April, accused Andrew of sexually assaulting her after being introduced by Epstein. Andrew has always vehemently denied her accusations.
The letter to the former prince is addressed to Royal Lodge, Windsor Great Park – the home he agreed last week to leave when he was stripped of his royal titles.
It outlines his “close relationship” with Epstein and references a recently revealed 2011 email exchange in which Andrew told him “we are in this together”.
And it says the committee has identified “financial records containing notations such as ‘massage for Andrew’ that raise serious questions”.
The committee said Andrew’s relationship with Epstein “further confirms our suspicion that you may have valuable information about the crimes committed by Mr Epstein and his co-conspirators”.
The letter, signed by 16 members of Congress, requested that Andrew respond by 20 November.
The move followed the publication of Ms Giuffre’s posthumous memoir and the US government’s release of documents from the paedophile’s estate.
Ms Giuffre alleged she was forced to have sex with Andrew three times – once at convicted sex trafficker Ghislaine Maxwell’s home in London, once at Epstein’s address in Manhattan, and once on the disgraced financier’s private island, Little St James.
The incident at Maxwell’s home allegedly occurred when Ms Giuffre was 17 years old.
Epstein took his own life in a New York prison in 2019 while awaiting trial on sex trafficking and conspiracy charges.