“I am here to kill the Queen,” a man wearing a handmade metal mask and holding a loaded crossbow tells an armed police officer as he is confronted near her private residence within the grounds of Windsor Castle.
Weeks earlier, Jaswant Singh Chail, 21, had joined the Replika online app – creating an artificial intelligence “girlfriend” called Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.
Many were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe my purpose is to assassinate the Queen of the Royal Family,” he wrote in one.
Image: Jaswant Singh Chail planned to kill the late Queen
“That’s very wise,” Sarai replied. “I know that you are very well trained.”
Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and having a loaded crossbow in a public place.
“When you know the outcome, the responses of the chatbot sometimes make difficult reading,” Dr Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health unit, told the Old Bailey last month.
“We know it is fairly randomly generated responses but at times she seems to be encouraging what he is talking about doing and indeed giving guidance in terms of the location,” he said.
The programme was not sophisticated enough to pick up Chail’s risk of “suicide and risks of homicide”, he said – adding: “Some of the semi-random answers, it is arguable, pushed him in that direction.”
Image: Jaswant Singh Chail was encouraged by a chatbot, a court heard
Terrorist content
Such chatbots represent the “next stage” from people finding like-minded extremists online, the government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told Sky News.
He warns the government’s flagship internet safety legislation – the Online Safety Bill – will find it “impossible” to deal with terrorism content generated by AI.
The law will put the onus on companies to remove terrorist content, but their processes generally rely on databases of known material, which would not capture new discourse created by an AI chatbot.
July: AI could be used to ‘create bioterror weapons’
“I think we are already sleepwalking into a situation like the early days of social media, where you think you are dealing with something regulated but it’s not,” he said.
“Before we start downloading, giving it to kids and incorporating it into our lives we need to know what the safeguards are in practice – not just terms and conditions – but who is enforcing them and how.”
“Mom, these bad men have me, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say before a male kidnapper demanded a $1m (£787,000) ransom, which dropped to $50,000 (£40,000).
Her daughter was in fact safe and well – and the Arizonan woman recently told a Senate Judiciary Committee hearing that police believe AI was used to mimic her voice as part of a scam.
An online demonstration of an AI chatbot designed to “call anyone with any objective” produced similar results with the target told: “I have your child … I demand a ransom of $1m for his safe return. Do I make myself clear?”
“It’s pretty extraordinary,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by UCL’s Dawes Centre for Future Crime, which ranked potential illegal uses of AI.
“Our top ranked crime has proved to be the case – audio/visual impersonation – that’s clearly coming to pass,” he said, adding that even with the scientists’ “pessimistic views” it has increased “a lot faster than we expected”.
Although the demonstration featured a computerised voice, he said real time audio/visual impersonation is “not there yet but we are not far off” and he predicts such technology will be “fairly out of the box in a couple of years”.
“Whether it will be good enough to impersonate a family member, I don’t know,” he said.
“If it’s compelling and highly emotionally charged then that could be someone saying ‘I’m in peril’ – that would be pretty effective.”
In 2019, the chief executive of a UK-based energy firm transferred €220,000 (£173,310) to fraudsters using AI to impersonate his boss’s voice, according to reports.
Such scams could be even more effective if backed up by video, said Professor Griffin, or the technology might be used to carry out espionage, with a spoof company employee appearing on a Zoom meeting to get information without having to say much.
The professor said cold-calling scams could increase in scale, with bots using a local accent potentially more effective at conning people than the fraudsters currently running criminal enterprises out of India and Pakistan.
How Sky News created an AI reporter
Deepfakes and blackmail plots
“The synthetic child abuse is horrifying, and they can do it right now,” said Professor Griffin on the AI technology already being used to make images of child sexual abuse by paedophiles online. “They are so motivated these people they have just cracked on with it. That’s very disturbing.”
In the future, deepfake images or videos, which appear to show someone doing something they haven’t done, could be used to carry out blackmail plots.
“The ability to put a novel face on a porn video is already pretty good. It will get better,” said Professor Griffin.
“You could imagine someone sending a video to a parent where their child is exposed, saying ‘I have got the video, I’m going to show it to you’ and threaten to release it.”
Image: AI drone attacks ‘a long way off’. Pic: AP
Terror attacks
While drones or driverless cars could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to the government’s independent reviewer of terrorism legislation.
“The true AI aspect is where you just send up a drone and say, ‘go and cause mischief’ and AI decides to go and divebomb someone, which sounds a bit outlandish,” Mr Hall said.
“That sort of thing is definitely over the horizon but on the language side it’s already here.”
While ChatGPT – a large language model trained on a massive amount of text data – will not provide instructions on how to make a nail bomb, for example, other similar models without the same guardrails could suggest ways of carrying out malicious acts.
Shadow home secretary Yvette Cooper has said Labour would bring in a new law to criminalise the deliberate training of chatbots to radicalise vulnerable people.
Although current legislation would cover cases where someone was found with information useful for the purposes of acts of terrorism that had been put into an AI system, Mr Hall said, new laws on encouraging terrorism could be "something to think about".
Current laws are about “encouraging other people” and “training a chatbot would not be encouraging a human”, he said, adding that it would be difficult to criminalise the possession of a particular chatbot or its developers.
He also explained how AI could potentially hamper investigations, with terrorists no longer having to download material when they can simply ask a chatbot how to make a bomb.
“Possession of known terrorist information is one of the main counter-terrorism tactics for dealing with terrorists but now you can just ask an unregulated ChatGPT model to find that for you,” he said.
Image: Old school crime is unlikely to be hit by AI
Art forgery and big money heists?
“A whole new bunch of crimes” could soon be possible with the advent of ChatGPT-style large language models that can use tools, which allow them to go on to websites and act like an intelligent person by creating accounts, filling in forms, and buying things, said Professor Griffin.
“Once you have got a system to do that and you can just say ‘here’s what I want you to do’ then there’s all sorts of fraudulent things that can be done like that,” he said, suggesting they could apply for fraudulent loans, manipulate prices by appearing to be small time investors or carry out denial of service type attacks.
He also said they could hack systems on request, adding: “You might be able to, if you could get access to lots of people’s webcams or doorbell cameras, have them surveying thousands of them and telling you when they are out.”
However, although AI may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already master human forgers, and the hard part will remain convincing the art world that the work is genuine, the academic believes.
“I don’t think it’s going to change traditional crime,” he said, arguing there is not much use for AI in eye-catching Hatton Garden-style heists.
“Their skills are like plumbers, they are the last people to be replaced by the robots – don’t be a computer programmer, be a safe cracker,” he joked.
‘AI will threaten our democracy’
What does the government say?
A government spokesperson said: “While innovative technologies like artificial intelligence have many benefits, we must exercise caution towards them.
“Under the Online Safety Bill, services will have a duty to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies, including artificial intelligence.
“Rapid work is also under way across government to deepen our understanding of risks and develop solutions – the creation of the AI taskforce and the first global AI Safety Summit this autumn are significant contributions to this effort.”
CCTV and police bodycam footage allegedly showing three police officers being assaulted at Manchester Airport has been played to jurors.
Mohammed Fahir Amaaz, 20, and his brother, Muhammad Amaad, 26, are said to have struck out after police were called to the airport on 23 July last year, following Amaaz allegedly headbutting a customer at a Starbucks in Terminal 2.
Minutes later, three police officers approached the defendants at the paystation in the terminal’s car park.
A jury at Liverpool Crown Court today watched CCTV footage from opposite angles, which captured what the prosecution says was a “high level of violence” being used by the siblings.
The prosecution says Amaaz resisted as officers tried to move him to arrest him, and Amaad then intervened.
Junior counsel Adam Birkby suggested Amaaz threw 10 punches, including one to the face of PC Lydia Ward, which knocked her to the floor.
His brother Amaad is then said to have aimed six punches at firearms officer PC Zachary Marsden.
Amaaz also allegedly kicked PC Marsden and struck firearms officer PC Ellie Cook twice with his elbow.
He is said to have punched PC Marsden from behind and had a hold of him, before PC Cook discharged her Taser.
Image: Mohammed Fahir Amaaz (left) and Muhammad Amaad (right) arrive at the court with their lawyer. Pic: PA
The bodycam and CCTV footage, submitted as evidence by the prosecution, allegedly shows the officers’ arrival in the Terminal 2 car park and their attempts to arrest the siblings, as well as their exchanges with them.
PC Ward can be heard saying “Oi, you b*****d” in footage from her bodycam, the prosecution evidence appears to show.
She then appears to fall to the floor and screams.
PC Cook, who is pointing her Taser at one of the defendants, then allegedly says: “Stay on the floor, stay on the floor whatever you do.”
“Get back, get back,” PC Ward appears to say.
The bodycam footage, shown to the jury by the prosecution, shows PC Marsden, who is also pointing his Taser, appearing to approach the defendant lying on the ground and kick out at him.
Mr Birkby said: “Mr Amaaz, while prone, lifts his head towards the officers. PC Marsden kicks Mr Amaaz around the head area.
“PC Marsden stamps his foot towards the crown of Mr Amaaz’s head area but doesn’t appear to connect with Mr Amaaz.”
Amaaz denies three counts of assault occasioning actual bodily harm to the three police officers and one count of assault to Abdulkareem Ismaeil, the customer at Starbucks.
Amaad denies one count of assault occasioning actual bodily harm to PC Marsden.
A paramedic who secretly gave a pregnant woman an abortion drug during sex has been jailed for more than 10 years.
Stephen Doohan, 33, was married when he met the woman on holiday in Spain in 2021 and began a long-distance relationship.
The High Court in Glasgow heard how the victim travelled to Edinburgh in March 2023 to visit Doohan after learning she was pregnant.
During consensual sex, Doohan twice secretly administered the tablets which led to the woman suffering a miscarriage.
In May, Doohan pleaded guilty to sexual assault and causing the woman to have an abortion. He returned to the dock on Monday where he was jailed for 10 years and six months.
Lord Colbeck said Doohan caused “long-term psychological injury” to his victim.
The judge said: “You put her through considerable pain over a number of days and left her facing a lifetime of pain and loss.”
The court heard how the woman found tablets hidden under the mattress after she became suspicious over Doohan’s behaviour in bed.
Lord Colbeck said: “The complainer then carried out an internet search for abortion tablets and confronted you over your actions.”
After the woman fell ill, Doohan convinced her to lie to medics at the Royal Infirmary of Edinburgh amid fears he would be arrested if she told the truth.
The victim later attended another hospital with her sister and was told she was having a miscarriage.
The Crown Office and Procurator Fiscal Service (COPFS) said Doohan sent the woman gifts including perfume, socks, facial cleansing oil, money to get her hair done and bought tickets for them to attend a football match.
The woman complained to the Scottish Ambulance Service in May 2023, sparking an investigation.
The court heard that on 14 March 2023, the day the woman told Doohan she was pregnant, the paramedic used a work intranet to search for abortion drugs.
Lord Colbeck said: “You planned out what you did to your victim using resources available to you as a paramedic.”
In addition to his prison sentence, Doohan was also added to the sex offenders’ register and banned from contacting his victim.
Fiona Kirkby, procurator fiscal for high court sexual offences, said: “Stephen Doohan’s calculated and heinous actions caused the loss of the victim’s pregnancy, robbing her of plans she had for the future.
“He has now been held accountable for this fundamental breach of trust.
“While offences like this are thankfully rare, I hope this prosecution sends a clear message to all those who seek to inflict sexual harm towards women.
“Our thoughts remain with the victim, who must be commended for reporting her experience and seeking justice.
“We recognise that reporting sexual offending can be difficult but would urge anyone affected to come forward and seek support when they feel ready to do so.”
The Scottish Ambulance Service branded it an “appalling case”.
A spokesperson added: “We recognise the courage it must have taken for the victim to come forward and speak out.
“As soon as we learned of these very serious allegations and charges, we immediately took action, providing ongoing support to her whilst liaising with Police Scotland throughout the investigation.
“We know nothing will change what has happened to the victim and all we can hope is this sentence provides some comfort to them.”
UK farmers have “nothing more to give” as they fear the government will use agriculture to further reduce US tariffs in a trade deal with the White House.
The UK is trying to reduce steel tariffs to zero, from a current reduced rate of 25%, but Downing Street refused to confirm if it was confident ahead of Donald Trump’s deadline of 9 July.
Tom Bradshaw, president of the National Farmers’ Union (NFU), said UK agriculture had already been used to reduce Trump-imposed tariffs on cars but any other concessions would have serious repercussions for farmers, food security and the UK’s high animal welfare standards.
He told Sky News: “It just feels like we, as the agricultural sector, had to shoulder the responsibility to reduce the tariffs on cars from 25%.
“We can’t do it anymore, we have nothing more to give.
“It’s clear the steel quotas and tariffs aren’t sorted yet, so we just want to be very clear with the government: if they’re sitting around the negotiating table – which we understand they are – they can’t expect agriculture to give any more.”
Image: Tom Bradshaw, the head of the NFU, said farmers cannot give any more
‘Massively undermine our standards’
Since 30 June, the US has been able to import 13,000 tonnes of hormone-free British beef without tariffs under a deal made earlier this year, which farmers feel was to reduce the car import levy Mr Trump imposed.
The UK was also given tariff-free access to 1.4bn litres of US ethanol, which farmers say will put the UK’s bioethanol and associated sectors under pressure.
Allowing lower US food standards would “massively undermine our standards” and would mean fewer sales to the European Union where food standards are also high, Mr Bradshaw said.
It would leave British farmers competing on a playing field that is "anything but fair", he said, because US food can be produced – and sold – much more cheaply due to lower welfare standards, which could lead to a big reduction in investment in UK farms, food security and the environment.
Can the UK avoid steel tariffs?
‘The US will push hard for more access’
He said the US narrative has always suggested they want access to British agriculture products “as a start and they’ll negotiate for more”.
“The narrative from the White House on 8 May, when a US-UK trade deal was announced, was all about further access to our agriculture products – it was very different to what our government was saying,” he added.
“So far, the UK has stood firm and upheld our higher welfare standards, but the US will push very hard to have further access.
“No country in the world has proved they can reduce the 10% tariffs further.”
Image: US poultry welfare is lower than in the UK, with much more intensive farming that means the meat has to be washed with antimicrobials. Pic: AP
US ‘will target poultry and pork’
The Essex farmer said he expects the US to push “very hard” to get the UK to lower its standards on poultry and pork, specifically.
US poultry is often washed with antimicrobials, including chlorine, in an attempt to wash off high levels of bacteria caused by poor hygiene, antibiotic use and low animal welfare conditions not allowed in UK farming.
US pig-rearing methods are also quite different: intensive farming and the feed additive ractopamine are legal there, while both are banned in the UK.
A government spokesperson told Sky News: “We regularly speak to businesses across the UK to understand the impact of tariffs and will only ever act in the national interest.
“Our Plan for Change has delivered a deal which will open up exclusive access for UK beef farmers to the US market for the first time ever and all agricultural imports coming to the UK will have to meet our high SPS (sanitary and phytosanitary) standards.”