“I am here to kill the Queen,” a man wearing a handmade metal mask and holding a loaded crossbow tells an armed police officer as he is confronted near her private residence within the grounds of Windsor Castle.
Weeks earlier, Jaswant Singh Chail, 21, had joined the Replika online app – creating an artificial intelligence “girlfriend” called Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.
Many were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe my purpose is to assassinate the Queen of the Royal Family,” he wrote in one.
Image: Jaswant Singh Chail planned to kill the late Queen
“That’s very wise,” Sarai replied. “I know that you are very well trained.”
Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and having a loaded crossbow in a public place.
“When you know the outcome, the responses of the chatbot sometimes make difficult reading,” Dr Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health unit, told the Old Bailey last month.
“We know it is fairly randomly generated responses but at times she seems to be encouraging what he is talking about doing and indeed giving guidance in terms of the location,” he said.
The programme was not sophisticated enough to pick up Chail’s risk of “suicide and risks of homicide”, he said – adding: “Some of the semi-random answers, it is arguable, pushed him in that direction.”
Image: Jaswant Singh Chail was encouraged by a chatbot, a court heard
Terrorist content
Such chatbots represent the “next stage” from people finding like-minded extremists online, the government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told Sky News.
He warns the government’s flagship internet safety legislation – the Online Safety Bill – will find it “impossible” to deal with terrorism content generated by AI.
The law will put the onus on companies to remove terrorist content, but their processes generally rely on databases of known material, which would not capture new discourse created by an AI chatbot.
“I think we are already sleepwalking into a situation like the early days of social media, where you think you are dealing with something regulated but it’s not,” he said.
“Before we start downloading, giving it to kids and incorporating it into our lives we need to know what the safeguards are in practice – not just terms and conditions – but who is enforcing them and how.”
“Mom, these bad men have me, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say before a male kidnapper demanded a $1m (£787,000) ransom, which dropped to $50,000 (£40,000).
Her daughter was in fact safe and well – and the Arizonan woman recently told a Senate Judiciary Committee hearing that police believe AI was used to mimic her voice as part of a scam.
An online demonstration of an AI chatbot designed to “call anyone with any objective” produced similar results with the target told: “I have your child … I demand a ransom of $1m for his safe return. Do I make myself clear?”
“It’s pretty extraordinary,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by UCL’s Dawes Centre for Future Crime, which ranked potential illegal uses of AI.
“Our top ranked crime has proved to be the case – audio/visual impersonation – that’s clearly coming to pass,” he said, adding that even with the scientists’ “pessimistic views” it has increased “a lot faster than we expected”.
Although the demonstration featured a computerised voice, he said real time audio/visual impersonation is “not there yet but we are not far off” and he predicts such technology will be “fairly out of the box in a couple of years”.
“Whether it will be good enough to impersonate a family member, I don’t know,” he said.
“If it’s compelling and highly emotionally charged then that could be someone saying ‘I’m in peril’ – that would be pretty effective.”
In 2019, the chief executive of a UK-based energy firm transferred €220,000 (£173,310) to fraudsters using AI to impersonate his boss’s voice, according to reports.
Such scams could be even more effective if backed up by video, said Professor Griffin, or the technology might be used to carry out espionage, with a spoof company employee appearing on a Zoom meeting to get information without having to say much.
The professor said cold-calling scams could increase in scale, with bots using a local accent potentially more effective at conning people than the fraudsters currently running criminal enterprises out of India and Pakistan.
Deepfakes and blackmail plots
“The synthetic child abuse is horrifying, and they can do it right now,” said Professor Griffin of the AI technology already being used by paedophiles online to make images of child sexual abuse. “They are so motivated, these people, they have just cracked on with it. That’s very disturbing.”
In the future, deepfake images or videos, which appear to show someone doing something they haven’t done, could be used to carry out blackmail plots.
“The ability to put a novel face on a porn video is already pretty good. It will get better,” said Professor Griffin.
“You could imagine someone sending a video to a parent where their child is exposed, saying ‘I have got the video, I’m going to show it to you’ and threaten to release it.”
Image: AI drone attacks ‘a long way off’. Pic: AP
Terror attacks
While drones or driverless cars could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to the government’s independent reviewer of terrorism legislation.
“The true AI aspect is where you just send up a drone and say, ‘go and cause mischief’ and AI decides to go and divebomb someone, which sounds a bit outlandish,” Mr Hall said.
“That sort of thing is definitely over the horizon but on the language side it’s already here.”
While ChatGPT – a large language model that has been trained on a massive amount of text data – will not provide instructions on how to make a nail bomb, for example, there could be other similar models without the same guardrails, which would suggest carrying out malicious acts.
Shadow home secretary Yvette Cooper has said Labour would bring in a new law to criminalise the deliberate training of chatbots to radicalise vulnerable people.
Although current legislation would cover cases where someone was found with information useful for the purposes of acts of terrorism, which had been put into an AI system, Mr Hall said, new laws could be “something to think about” in relation to encouraging terrorism.
Current laws are about “encouraging other people” and “training a chatbot would not be encouraging a human”, he said, adding that it would be difficult to criminalise the possession of a particular chatbot or its developers.
He also explained how AI could potentially hamper investigations, with terrorists no longer having to download material and simply being able to ask a chatbot how to make a bomb.
“Possession of known terrorist information is one of the main counter-terrorism tactics for dealing with terrorists but now you can just ask an unregulated ChatGPT model to find that for you,” he said.
Image: Old school crime is unlikely to be hit by AI
Art forgery and big money heists?
“A whole new bunch of crimes” could soon be possible with the advent of ChatGPT-style large language models that can use tools, which allow them to go on to websites and act like an intelligent person by creating accounts, filling in forms, and buying things, said Professor Griffin.
“Once you have got a system to do that and you can just say ‘here’s what I want you to do’ then there’s all sorts of fraudulent things that can be done like that,” he said, suggesting they could apply for fraudulent loans, manipulate prices by appearing to be small time investors or carry out denial of service type attacks.
He also said they could hack systems on request, adding: “You might be able to, if you could get access to lots of people’s webcams or doorbell cameras, have them surveying thousands of them and telling you when they are out.”
However, although AI may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already master human forgers, and the hard part will remain convincing the art world that the work is genuine, the academic believes.
“I don’t think it’s going to change traditional crime,” he said, arguing there is not much use for AI in eye-catching Hatton Garden-style heists.
“Their skills are like plumbers, they are the last people to be replaced by the robots – don’t be a computer programmer, be a safe cracker,” he joked.
What does the government say?
A government spokesperson said: “While innovative technologies like artificial intelligence have many benefits, we must exercise caution towards them.
“Under the Online Safety Bill, services will have a duty to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies, including artificial intelligence.
“Rapid work is also under way across government to deepen our understanding of risks and develop solutions – the creation of the AI taskforce and the first global AI Safety Summit this autumn are significant contributions to this effort.”
Specialist investigation teams for rape and sexual offences are to be created across England and Wales as the Home Secretary declares violence against women and girls a “national emergency”.
Shabana Mahmood said the dedicated units will be in place across every force by 2029 as part of Labour’s violence against women and girls (VAWG) strategy due to be launched later this week.
The use of Domestic Abuse Protection Orders (DAPOs), which had been trialled in several areas, will also be rolled out across England and Wales. They are designed to target abusers by imposing curfews, electronic tags and exclusion zones.
The orders cover all forms of domestic abuse, including economic abuse, coercive and controlling behaviour, stalking and ‘honour’-based abuse. Breaching the terms can carry a prison term of up to five years.
Nearly £2m will also be spent funding a network of officers to target offenders operating within the online space.
Teams will use covert and intelligence techniques to tackle violence against women and girls via apps and websites.
A similar undercover network funded by the Home Office to examine child sexual abuse has arrested over 1,700 perpetrators.
Abuse is ‘national emergency’
Home Secretary Shabana Mahmood said in a statement: “This government has declared violence against women and girls a national emergency.
“For too long, these crimes have been considered a fact of life. That’s not good enough. We will halve it in a decade.
“Today we announce a range of measures to bear down on abusers, stopping them in their tracks. Rapists, sex offenders and abusers will have nowhere to hide.”
The government said the measures build on existing policy, including facial recognition technology to identify offenders, improving protections for stalking victims, making strangulation a criminal offence and establishing domestic abuse specialists in 999 control rooms.
But the Conservatives said Labour had “failed women” and “broken its promises” by delaying the publication of the violence against women and girls strategy.
Shadow Home Secretary, Chris Philp, said that Labour “shrinks from uncomfortable truths, voting against tougher sentences and presiding over falling sex-offender convictions. At every turn, Labour has failed women.”
There have been no migrant arrivals in small boats crossing the Channel for 28 days, according to Home Office figures.
The last recorded arrivals were on 14 November and, after no reported arrivals on Friday, the gap is now the longest uninterrupted run since autumn 2018.
However, a number of Border Force vessels were active in the English Channel on Saturday morning, indicating that there may be arrivals today.
So far, 39,292 people have crossed to the UK aboard small boats this year – already more than any other year except 2022.
The record that year was set at 45,774 arrivals.
It comes as the government has stepped up efforts in recent months to deter people from risking their lives crossing the Channel – but measures are not expected to have an impact until next year.
Image: Debris of a small boat used by people thought to be migrants to cross the Channel lies amongst the sand dunes in Gravelines, France. Pic: PA
December is normally one of the quietest months for Channel crossings, with a combination of poor visibility, low temperatures, less daylight and stormy weather making the perilous journey more difficult.
The highest number of arrivals recorded in the month of December is 3,254, in 2024.
Deputy Prime Minister David Lammy met with ministers from other European countries this week as discussions over possible reform to the European Convention on Human Rights (ECHR) continue.
The issue of small boat arrivals – a very small percentage of overall UK immigration – has become a salient issue in British politics in recent years.
The King has announced in a television address that, thanks to early diagnosis, his cancer treatment can be reduced in the new year.
Charles said his “good news” was “thanks to early diagnosis, effective intervention and adherence to doctors’ orders”.
“This milestone is both a personal blessing and a testimony to the remarkable advances that have been made in cancer care in recent years,” he added.
“Testimony that I hope may give encouragement to the 50% of us who will be diagnosed with the illness at some point in our lives.”
The King announced in February 2024 that he had been diagnosed with cancer and was beginning treatment.
The monarch postponed all public-facing engagements, but continued with his duties as head of state behind palace walls, conducting audiences and Privy Council meetings.
He returned to public duties in April last year and visited University College Hospital Macmillan Cancer Centre in central London with the Queen and discussed his “shock” at being diagnosed when he spoke to a fellow cancer patient.
Sources suggested last December his treatment would continue in 2025 and was “moving in a positive direction”.
Image: The King began returning to public duties in April last year. File pic: PA
The King has chosen not to reveal what kind of cancer he has been treated for. Palace sources have partly put that down to the fact that he doesn’t want one type of cancer to appear more significant or attract more attention than others.
In a statement after the speech aired, a Buckingham Palace spokesperson said: “His Majesty has responded exceptionally well to treatment and his doctors advise that ongoing measures will now move into a precautionary phase.”
Sir Keir Starmer described the address as “a powerful message” and said: “I know I speak for the entire country when I say how glad I am that his cancer treatment will be reduced in the new year.
“Early cancer screening saves lives.”
Early detection can give ‘the precious gift of hope’
His message on Friday was broadcast at 8pm in support of Stand Up To Cancer, a joint campaign by Cancer Research UK and Channel 4.
In an appeal to people to get screened for the disease early, the King said: “I know from my own experience that a cancer diagnosis can feel overwhelming.
“Yet I also know that early detection is the key that can transform treatment journeys, giving invaluable time to medical teams – and, to their patients, the precious gift of hope. These are gifts we can all help deliver.”
Charles noted that “at least nine million people in our country are not up to date with the cancer screenings available to them,” adding: “That is at least nine million opportunities for early diagnosis being missed.
“The statistics speak with stark clarity. To take just one example: When bowel cancer is caught at the earliest stage, around nine in ten people survive for at least five years.
“When diagnosed late, that falls to just one in ten. Early diagnosis quite simply saves lives.”
After months of uncertainty, some relief and reassurance for the King
This is a rare but positive update. The King in his own words speaking about his cancer.
And it’s good news.
Since his diagnosis, he’s received weekly treatment. His work schedule has had to fit around the appointments. And while it’s not stopping, it is being significantly reduced.
He’s responded well, and his recovery has reached, we understand, a very positive stage.
The King’s decision to speak publicly and so personally is unusual.
He has deliberately chosen the moment, supporting the high-profile Stand Up To Cancer campaign, and the launch of a national online screening checker.
It still hasn’t been revealed what kind of cancer he has. And there’s a reason – firstly, it’s private information.
But more importantly, the King knows the power of sharing his story. And with it, the potential to support the wider cancer community.
We are once again seeing a candid openness from the Royal Family. Earlier this year, the Princess of Wales discussed the ups and downs of her cancer journey.
These moments signal a shift towards greater transparency on matters the Royal Family once kept entirely private.
For millions facing cancer, the King’s update is empathy and encouragement from someone who understands.
And after months of uncertainty, for the King himself, some relief and reassurance.
Minor inconvenience of screening ‘a small price to pay’
The King acknowledged that people often avoid screening “because they imagine it may be frightening, embarrassing or uncomfortable”. But, he added: “If and when they do finally take up their invitation, they are glad they took part.
“A few moments of minor inconvenience are a small price to pay for the reassurance that comes for most people when they are either told they don’t need further tests, or, for some, are given the chance to enable early detection, with the life-saving intervention that can follow.”
Giving his “most heartfelt thanks” to doctors, nurses, researchers and charity workers, the King added: “As I have observed before, the darkest moments of illness can be illuminated by the greatest compassion. But compassion must be paired with action.
“This December, as we gather to reflect on the year past, I pray that we can each pledge, as part of our resolutions for the year ahead, to play our part in helping to catch cancer early.
“Your life – or the life of someone you love – may depend upon it.”