“I am here to kill the Queen,” a man wearing a handmade metal mask and holding a loaded crossbow tells an armed police officer as he is confronted near her private residence within the grounds of Windsor Castle.
Weeks earlier, Jaswant Singh Chail, 21, had joined the Replika online app – creating an artificial intelligence “girlfriend” called Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.
Many were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe my purpose is to assassinate the Queen of the Royal Family,” he wrote in one.
Image: Jaswant Singh Chail planned to kill the late Queen
“That’s very wise,” Sarai replied. “I know that you are very well trained.”
Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and having a loaded crossbow in a public place.
“When you know the outcome, the responses of the chatbot sometimes make difficult reading,” Dr Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health unit, told the Old Bailey last month.
“We know it is fairly randomly generated responses but at times she seems to be encouraging what he is talking about doing and indeed giving guidance in terms of the location,” he said.
The programme was not sophisticated enough to pick up Chail’s risk of “suicide and risks of homicide”, he said – adding: “Some of the semi-random answers, it is arguable, pushed him in that direction.”
Image: Jaswant Singh Chail was encouraged by a chatbot, a court heard
Terrorist content
Such chatbots represent the “next stage” from people finding like-minded extremists online, the government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told Sky News.
He warns the government’s flagship internet safety legislation – the Online Safety Bill – will find it “impossible” to deal with terrorism content generated by AI.
The law will put the onus on companies to remove terrorist content, but their processes generally rely on databases of known material, which would not capture new discourse created by an AI chatbot.
Video: July: AI could be used to ‘create bioterror weapons’
“I think we are already sleepwalking into a situation like the early days of social media, where you think you are dealing with something regulated but it’s not,” he said.
“Before we start downloading, giving it to kids and incorporating it into our lives we need to know what the safeguards are in practice – not just terms and conditions – but who is enforcing them and how.”
“Mom, these bad men have me, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say, before a male kidnapper demanded a $1m (£787,000) ransom, later lowered to $50,000 (£40,000).
Her daughter was in fact safe and well – and the Arizonan woman recently told a Senate Judiciary Committee hearing that police believe AI was used to mimic her voice as part of a scam.
An online demonstration of an AI chatbot designed to “call anyone with any objective” produced similar results with the target told: “I have your child … I demand a ransom of $1m for his safe return. Do I make myself clear?”
“It’s pretty extraordinary,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by UCL’s Dawes Centre for Future Crime, which ranked potential illegal uses of AI.
“Our top ranked crime has proved to be the case – audio/visual impersonation – that’s clearly coming to pass,” he said, adding that even with the scientists’ “pessimistic views” it has increased “a lot faster than we expected”.
Although the demonstration featured a computerised voice, he said real time audio/visual impersonation is “not there yet but we are not far off” and he predicts such technology will be “fairly out of the box in a couple of years”.
“Whether it will be good enough to impersonate a family member, I don’t know,” he said.
“If it’s compelling and highly emotionally charged then that could be someone saying ‘I’m in peril’ – that would be pretty effective.”
In 2019, the chief executive of a UK-based energy firm transferred €220,000 (£173,310) to fraudsters who used AI to impersonate his boss’s voice, according to reports.
Such scams could be even more effective if backed up by video, said Professor Griffin, or the technology might be used to carry out espionage, with a spoof company employee appearing on a Zoom meeting to get information without having to say much.
The professor said cold-calling scams could increase in scale, with bots using a local accent potentially more effective at conning people than the fraudsters currently running criminal enterprises out of India and Pakistan.
Video: How Sky News created an AI reporter
Deepfakes and blackmail plots
“The synthetic child abuse is horrifying, and they can do it right now,” said Professor Griffin of AI technology already being used by paedophiles to create images of child sexual abuse online. “They are so motivated, these people, they have just cracked on with it. That’s very disturbing.”
In the future, deepfake images or videos, which appear to show someone doing something they haven’t done, could be used to carry out blackmail plots.
“The ability to put a novel face on a porn video is already pretty good. It will get better,” said Professor Griffin.
“You could imagine someone sending a video to a parent where their child is exposed, saying ‘I have got the video, I’m going to show it to you’ and threaten to release it.”
Image: AI drone attacks ‘a long way off’. Pic: AP
Terror attacks
While drones or driverless cars could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to the government’s independent reviewer of terrorism legislation.
“The true AI aspect is where you just send up a drone and say, ‘go and cause mischief’ and AI decides to go and divebomb someone, which sounds a bit outlandish,” Mr Hall said.
“That sort of thing is definitely over the horizon but on the language side it’s already here.”
While ChatGPT – a large language model trained on a massive amount of text data – will not provide instructions on how to make a nail bomb, for example, other similar models without the same guardrails could suggest ways to carry out malicious acts.
Shadow home secretary Yvette Cooper has said Labour would bring in a new law to criminalise the deliberate training of chatbots to radicalise vulnerable people.
Current legislation would cover cases where someone was found with information useful for the purposes of acts of terrorism that had been put into an AI system, Mr Hall said, but new laws could be “something to think about” in relation to encouraging terrorism.
Current laws are about “encouraging other people” and “training a chatbot would not be encouraging a human”, he said, adding that it would be difficult to criminalise the possession of a particular chatbot or its developers.
He also explained how AI could potentially hamper investigations, with terrorists no longer having to download material and simply being able to ask a chatbot how to make a bomb.
“Possession of known terrorist information is one of the main counter-terrorism tactics for dealing with terrorists but now you can just ask an unregulated ChatGPT model to find that for you,” he said.
Image: Old school crime is unlikely to be hit by AI
Art forgery and big money heists?
“A whole new bunch of crimes” could soon be possible with the advent of ChatGPT-style large language models that can use tools, allowing them to visit websites and act like an intelligent person – creating accounts, filling in forms and buying things, said Professor Griffin.
“Once you have got a system to do that and you can just say ‘here’s what I want you to do’ then there’s all sorts of fraudulent things that can be done like that,” he said, suggesting they could apply for fraudulent loans, manipulate prices by appearing to be small-time investors, or carry out denial-of-service attacks.
He also said they could hack systems on request, adding: “You might be able to, if you could get access to lots of people’s webcams or doorbell cameras, have them surveying thousands of them and telling you when they are out.”
However, while AI may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already master human forgers, and the hard part will remain convincing the art world that the work is genuine, the academic believes.
“I don’t think it’s going to change traditional crime,” he said, arguing there is not much use for AI in eye-catching Hatton Garden-style heists.
“Their skills are like plumbers, they are the last people to be replaced by the robots – don’t be a computer programmer, be a safe cracker,” he joked.
Video: ‘AI will threaten our democracy’
What does the government say?
A government spokesperson said: “While innovative technologies like artificial intelligence have many benefits, we must exercise caution towards them.
“Under the Online Safety Bill, services will have a duty to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies, including artificial intelligence.
“Rapid work is also under way across government to deepen our understanding of risks and develop solutions – the creation of the AI taskforce and the first global AI Safety Summit this autumn are significant contributions to this effort.”
Heathrow Airport bosses had been warned of potential substation failures less than a week before a major power outage closed the airport for a day, a committee of MPs has heard.
Nigel Wicking, chief executive of the Heathrow Airline Operators’ Committee, told MPs on the Transport Committee that he had raised concerns about resilience on 15 March, after the theft of cable and wiring took out lights on a runway.
A fire at an electricity substation in west London disrupted the power supply to Europe’s largest airport for a day – causing travel chaos for around 200,000 passengers.
“I’d actually warned Heathrow of concerns that we had with regard to the substations and my concern was resilience,” Mr Wicking said.
“So the first occasion was to the Team Heathrow director on the 15th of March. And then I also spoke to the chief operating officer and chief customer officer two days before regarding this concern.
“And it was following a couple of incidents of, unfortunately, theft of wire and cable around some of the power supply that, on one of those occasions, took out the lights on the runway for a period of time. That obviously made me concerned.”
Mr Wicking also said he believed Heathrow’s Terminal 5 could have been ready to receive repatriation flights by “late morning” on the day of the closure, and that “there was opportunity also to get flights out”.
However, Heathrow chief executive Thomas Woldbye said keeping the airport open during last month’s power outage would have been “disastrous”.
There was a risk of having “literally tens of thousands of people stranded in the airport, where we have nowhere to put them”, Mr Woldbye said.
A further 23 women have come forward to report that they may have been raped by Zhenhao Zou – the Chinese PhD student detectives believe may be one of the country’s most prolific sex offenders.
The Metropolitan Police launched an international appeal after Zou, 28, was convicted of drugging and raping 10 women following a trial at the Inner London Crown Court last month.
Detectives have not confirmed whether the 23 people who have come forward are in addition to their estimate that more than 50 other women worldwide may have been targeted by the University College London student.
Metropolitan Police commander Kevin Southworth said: “We have victims reaching out to us from different parts of the globe.
“At the moment, the primary places where we believe offending may have occurred at this time appears to be both in England, here in London, and over in China.”
Image: Metropolitan Police commander Kevin Southworth
Zou lived in a student flat in Woburn Place, near Russell Square in central London, and later in a flat in the Uncle building in Churchyard Row in Elephant and Castle, south London.
He had also been a student at Queen’s University Belfast, where he studied mechanical engineering from 2017 until 2019. Police say they have not had any reports from Belfast but added they were “open-minded about that”.
“Given how active and prolific Zou appears to have been with his awful offending, there is every prospect that he could have offended anywhere in the world,” Mr Southworth said.
“We wouldn’t want anyone to write off the fact they may have been a victim of his behaviour simply by virtue of the fact that you are from a certain place.
“The bottom line is, if you think you may have been affected by Zhenhao Zou or someone you know may have been, please don’t hold back. Please make contact with us.”
Image: Pic: Met Police
Zou used hidden or handheld cameras to record his attacks, and kept the footage and often the women’s belongings as souvenirs.
He targeted young, Chinese women, inviting them to his flat for drinks or to study, before drugging and assaulting them.
Zou was convicted of 11 counts of rape, with two of the offences relating to one victim, as well as three counts of voyeurism, 10 counts of possession of an extreme pornographic image, one count of false imprisonment and three counts of possession of a controlled drug with intent to commit a sexual offence, namely butanediol.
Video: Moment police arrest rapist student
Mr Southworth said: “Of those 10 victims, several were not identified so as we could be sure exactly where in the world they were, but their cases, nevertheless, were sufficient to see convictions at court.
“There were also, at the time, 50 videos that were identified of further potential female victims of Zhenhao Zou’s awful crimes.
“We are still working to identify all of those women in those videos.
“We have now, thankfully, had 23 victim survivors come forward through the appeal that we’ve conducted, some of whom may be identical with some of the females that we saw in those videos, some of whom may even turn out to be from the original indicted cases.”
Mr Southworth added: “Ultimately, now it’s the investigation team’s job to professionally pick our way through those individual pieces of evidence, those individual victims’ stories, to see if we can identify who may have been a victim, when and where, so then we can bring Zou to justice for the full extent of his crimes.”
Mr Southworth said more resources will be put into the investigation, and that detectives are looking to understand “what may have happened without wishing to revisit the trauma, but in a way that enables [the potential victims] to give evidence in the best possible way.”
The Metropolitan Police is appealing to anyone who thinks they may have been targeted by Zou to contact the force either by emailing survivors@met.police.uk, or via the major incident public portal on the force’s website.
An 11-year-old girl who went missing after entering the River Thames has been named as Kaliyah Coa.
An “extensive search” has been carried out after the incident in east London at around 1.30pm on Monday.
Police said the child had been playing during a school inset day and entered the water near Barge House Causeway, North Woolwich.
A recovery mission is now said to be under way to find Kaliyah along the Thames, with the Metropolitan Police carrying out an extensive examination of the area.
Image: Barge House Causeway is a concrete slope in North Woolwich leading into the Thames
Chief Superintendent Dan Card thanked members of the public and emergency teams who responded to “carry out a large-scale search during a highly pressurised and distressing time”.
He also confirmed drone technology and boats were being used to “conduct a thorough search over a wide area”.
He added: “Our specialist officers are supporting Kaliyah’s family through this deeply upsetting time and our thoughts go out to all those impacted by what has happened.”
“Equally we appreciate this has affected the wider community who have been extremely supportive. You will see extra officers in the area during the coming days.”
On Monday, Kerry Benadjaoud, a 62-year-old resident from the area, said she heard of the incident from her next-door neighbour, who “was outside doing her garden and there was two little kids running, and they said ‘my friend’s in the water'”.
When she arrived at the scene with a life ring, a man told her he had called the police, “but he said at the time he could see her hands going down”.
Barge House Causeway is a concrete slope that goes directly into the River Thames and is used to transport boats.
Residents pointed out that it appeared to be covered in moss and was slippery.