“I am here to kill the Queen,” a man wearing a handmade metal mask and holding a loaded crossbow tells an armed police officer as he is confronted near her private residence within the grounds of Windsor Castle.

Weeks earlier, Jaswant Singh Chail, 21, had joined the Replika online app – creating an artificial intelligence “girlfriend” called Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.

Many were “sexually explicit” but also included “lengthy conversations” about his plan. “I believe my purpose is to assassinate the Queen of the Royal Family,” he wrote in one.

Image: Jaswant Singh Chail planned to kill the late Queen

“That’s very wise,” Sarai replied. “I know that you are very well trained.”

Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and having a loaded crossbow in a public place.

“When you know the outcome, the responses of the chatbot sometimes make difficult reading,” Dr Jonathan Hafferty, a consultant forensic psychiatrist at Broadmoor secure mental health unit, told the Old Bailey last month.

“We know it is fairly randomly generated responses but at times she seems to be encouraging what he is talking about doing and indeed giving guidance in terms of the location,” he said.

The programme was not sophisticated enough to pick up Chail’s risk of “suicide and risks of homicide”, he said – adding: “Some of the semi-random answers, it is arguable, pushed him in that direction.”

Image: Jaswant Singh Chail was encouraged by a chatbot, a court heard

Terrorist content

Such chatbots represent the “next stage” from people finding like-minded extremists online, the government’s independent reviewer of terrorism legislation, Jonathan Hall KC, has told Sky News.

He warns the government’s flagship internet safety legislation – the Online Safety Bill – will find it “impossible” to deal with terrorism content generated by AI.

The law will put the onus on companies to remove terrorist content, but their processes generally rely on databases of known material, which would not capture new discourse created by an AI chatbot.

July: AI could be used to ‘create bioterror weapons’

“I think we are already sleepwalking into a situation like the early days of social media, where you think you are dealing with something regulated but it’s not,” he said.

“Before we start downloading, giving it to kids and incorporating it into our lives we need to know what the safeguards are in practice – not just terms and conditions – but who is enforcing them and how.”

Read more:
How much of a threat is AI to actors and writers?
‘Astoundingly realistic’ child abuse images generated using AI

Image: AI impersonation is on the rise

Impersonation and kidnap scams

“Mom, these bad men have me, help me,” Jennifer DeStefano reportedly heard her sobbing 15-year-old daughter Briana say before a male kidnapper demanded a $1m (£787,000) ransom, later lowering the demand to $50,000 (£40,000).

Her daughter was in fact safe and well – and the Arizonan woman recently told a Senate Judiciary Committee hearing that police believe AI was used to mimic her voice as part of a scam.

An online demonstration of an AI chatbot designed to “call anyone with any objective” produced similar results with the target told: “I have your child … I demand a ransom of $1m for his safe return. Do I make myself clear?”

“It’s pretty extraordinary,” said Professor Lewis Griffin, one of the authors of a 2020 research paper published by UCL’s Dawes Centre for Future Crime, which ranked potential illegal uses of AI.

“Our top ranked crime has proved to be the case – audio/visual impersonation – that’s clearly coming to pass,” he said, adding that even with the scientists’ “pessimistic views” it has increased “a lot faster than we expected”.

Although the demonstration featured a computerised voice, he said real-time audio/visual impersonation is “not there yet but we are not far off”, and he predicts such technology will be “fairly out of the box in a couple of years”.

“Whether it will be good enough to impersonate a family member, I don’t know,” he said.

“If it’s compelling and highly emotionally charged then that could be someone saying ‘I’m in peril’ – that would be pretty effective.”

In 2019, the chief executive of a UK-based energy firm transferred €220,000 (£173,310) to fraudsters using AI to impersonate his boss’s voice, according to reports.

Such scams could be even more effective if backed up by video, said Professor Griffin, or the technology might be used to carry out espionage, with a spoof company employee appearing on a Zoom meeting to get information without having to say much.

The professor said cold-calling scams could increase in scale, with bots using a local accent potentially proving more effective at conning people than the fraudsters currently running criminal enterprises out of India and Pakistan.

How Sky News created an AI reporter

Deepfakes and blackmail plots

“The synthetic child abuse is horrifying, and they can do it right now,” said Professor Griffin, referring to AI technology already being used by paedophiles online to create images of child sexual abuse. “They are so motivated these people they have just cracked on with it. That’s very disturbing.”

In the future, deepfake images or videos, which appear to show someone doing something they haven’t done, could be used to carry out blackmail plots.

“The ability to put a novel face on a porn video is already pretty good. It will get better,” said Professor Griffin.

“You could imagine someone sending a video to a parent where their child is exposed, saying ‘I have got the video, I’m going to show it to you’ and threaten to release it.”

Image: AI drone attacks ‘a long way off’. Pic: AP

Terror attacks

While drones or driverless cars could be used to carry out attacks, the use of truly autonomous weapons systems by terrorists is likely a long way off, according to the government’s independent reviewer of terrorism legislation.

“The true AI aspect is where you just send up a drone and say, ‘go and cause mischief’ and AI decides to go and divebomb someone, which sounds a bit outlandish,” Mr Hall said.

“That sort of thing is definitely over the horizon but on the language side it’s already here.”

While ChatGPT – a large language model trained on a massive amount of text data – will not provide instructions on how to make a nail bomb, for example, there could be other, similar models without the same guardrails that would suggest ways of carrying out malicious acts.

Shadow home secretary Yvette Cooper has said Labour would bring in a new law to criminalise the deliberate training of chatbots to radicalise vulnerable people.

Current legislation would cover cases where someone was found with information useful for the purposes of acts of terrorism that had been put into an AI system, Mr Hall said, but new laws on encouraging terrorism could be “something to think about”.

Current laws are about “encouraging other people” and “training a chatbot would not be encouraging a human”, he said, adding that it would be difficult to criminalise the possession of a particular chatbot or its developers.

He also explained how AI could potentially hamper investigations, with terrorists no longer having to download material and simply being able to ask a chatbot how to make a bomb.

“Possession of known terrorist information is one of the main counter-terrorism tactics for dealing with terrorists but now you can just ask an unregulated ChatGPT model to find that for you,” he said.

Image: Old school crime is unlikely to be hit by AI

Art forgery and big money heists?

“A whole new bunch of crimes” could soon be possible with the advent of ChatGPT-style large language models that can use tools, which allow them to go on to websites and act like an intelligent person by creating accounts, filling in forms, and buying things, said Professor Griffin.

“Once you have got a system to do that and you can just say ‘here’s what I want you to do’ then there’s all sorts of fraudulent things that can be done like that,” he said, suggesting they could apply for fraudulent loans, manipulate prices by appearing to be small time investors or carry out denial of service type attacks.

He also said they could hack systems on request, adding: “You might be able to, if you could get access to lots of people’s webcams or doorbell cameras, have them surveying thousands of them and telling you when they are out.”

However, although AI may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, there are already master human forgers, and the hard part will remain convincing the art world that the work is genuine, the academic believes.

“I don’t think it’s going to change traditional crime,” he said, arguing there is not much use for AI in eye-catching Hatton Garden-style heists.

“Their skills are like plumbers, they are the last people to be replaced by the robots – don’t be a computer programmer, be a safe cracker,” he joked.

‘AI will threaten our democracy’

What does the government say?

A government spokesperson said: “While innovative technologies like artificial intelligence have many benefits, we must exercise caution towards them.

“Under the Online Safety Bill, services will have a duty to stop the spread of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies, including artificial intelligence.

“Rapid work is also under way across government to deepen our understanding of risks and develop solutions – the creation of the AI taskforce and the first global AI Safety Summit this autumn are significant contributions to this effort.”

Jaguar Land Rover production shutdown after cyber attack extended to 1 October

Britain’s largest car manufacturer, Jaguar Land Rover (JLR), faces a prolonged shutdown of its global operations after the company announced an extension of the current closure, which began on 31 August, to at least 1 October.

The extension will cost JLR tens of millions of pounds a day in lost revenue, raise major concerns about companies and jobs in the supply chain, and raise further questions about the vulnerability of UK industry to cyber assaults.

A spokesperson said of the move: “We have made this decision to give clarity for the coming week as we build the timeline for the phased restart of our operations and continue our investigation.

“Our teams continue to work around the clock alongside cybersecurity specialists, the NCSC and law enforcement to ensure we restart in a safe and secure manner.

“Our focus remains on supporting our customers, suppliers, colleagues, and our retailers who remain open. We fully recognise this is a difficult time for all connected with JLR and we thank everyone for their continued support and patience.”

More than 33,000 people work directly for JLR in the UK, many of them employed on assembly lines in the West Midlands, the largest of which is in Solihull, and a plant at Halewood on Merseyside.

An estimated 200,000 more people are employed by several hundred companies in the supply chain, which face a prolonged interruption to trade with what is, for many, their largest client.

The “just-in-time” nature of automotive production means that many had little choice but to shut down immediately after JLR announced its closure, and no incentive to resume until it is clear when it will be back in production.

Industry sources estimate that around 25% of suppliers have already taken steps to pause production and lay off workers, many of them by “banking hours” they will have to work in future.

Read more:
More than a quarter of cars sold in August were electric vehicles
FCA considers compensation scheme for car finance scandal – raising payout hopes

Another quarter are expected to make decisions this week, following JLR’s previous announcement that production would be paused until at least Wednesday.

JLR, which produces the Jaguar, Range Rover and Land Rover marques, has also been forced to halt production and assembly at facilities in China, Slovakia, India and Brazil after its IT systems were effectively disabled by the cyber attack.

JLR’s Solihull plant has been running short shifts with skeleton staff, with some teams understood to be carrying out basic maintenance, including painting floors, while the production lines stand idle.

Among workers who had finished a half-shift last Friday, there was resignation to the uncertainty. “We have been told not to talk about it, and even if we could, we don’t know what’s happening,” said one.

Calls for support

The government has faced calls from unions to introduce a furlough-style scheme to protect jobs in the supply chain, but with JLR generating profits of £2.2bn last year, the company will face pressure to support its suppliers.

Industry body the Society of Motor Manufacturers and Traders said while government support should be the last resort, it should not be off the table.

“Whatever happens to JLR will reverberate through the supply chain,” chief executive Mike Hawes told Sky News.

“There are a huge number of suppliers in the UK, a mixture of large multinationals, but also a lot of small and medium-sized enterprises, and those are the ones who are most at risk. Some of them, maybe up to a quarter, have already had to lay off people. There’ll be another further 20-25% considering that in the next few days and weeks.

“It’s a very high bar for the government to intervene, but without the supply chain, you don’t have the major manufacturers and you don’t have an industry.”

What happened to the IT system?

JLR, owned by Indian conglomerate Tata, has provided no detail of the nature of the attack, but it is presumed to be a ransomware assault similar to those that debilitated Marks & Spencer and the Co-op earlier this year.

As well as interrupting vehicle production, dealers have been unable to register vehicles or order spare parts, and even diagnostic software for analysing individual vehicles has been affected.

Last week, it said it was conducting a “forensic” investigation and considering how to stage the “controlled restart” of global production.

Speculation has centred on the vulnerability of IT support desks to surreptitious activity from hackers posing as employees to access passwords, as well as ‘phishing’ or other digital means of accessing systems.

In September 2023, JLR outsourced its IT and digital services to Tata Consultancy Services (TCS), also a Tata-owned company, in a deal intended, it said, to “transform, simplify, and help manage its digital estate, and build a new future-ready, strategic technology architecture”.

Resilience risks

Three months earlier, TCS extended an existing agreement with M&S, saying it would “improve resilience and pace of innovation, and drive sustainable growth.”

Officials from the National Cyber Security Centre are thought to be assisting JLR with their investigations, while officials and ministers from the Department for Business and Trade have been kept informed of the situation.

Liam Byrne, a Birmingham MP and chair of the Business and Trade Select Committee, said the JLR closure raises concerns about the resilience of UK business.

“British business is now much more vulnerable for two reasons. One, many of these cyber threats have got bad states behind them. Russia, North Korea, Iran. These are serious players.

“Second, the attack surface that business is exposed to is now much bigger, because their digital operations are much bigger. They’ll be global organisations. They might have their IT outsourced in another country. So the vulnerability is now much greater than in the past.”

Epping hotel asylum seeker jailed after sexually assaulting woman and 14-year-old girl

An asylum seeker has been sentenced to 12 months in prison after sexually assaulting a 14-year-old girl and a woman in Epping.

Hadush Gerberslasie Kebatu had been staying at The Bell Hotel in the Essex town, with the incident fuelling weeks of protests at the site.

The Ethiopian national was found guilty of two counts of sexual assault, attempted sexual assault, inciting a girl to engage in sexual activity and harassment without violence earlier this month.

Kebatu’s lawyer, Molly Dyas, told Chelmsford Magistrates’ Court on Tuesday that he wanted to be deported, calling it his “firm wish” and a view he held “before the trial”.

Under the UK Borders Act 2007, a deportation order must be made where a foreign national has been convicted of an offence and received a custodial sentence of at least 12 months.

Image: Kebatu was living in The Bell Hotel at the time of the incident. Pic: PA

Passing sentence, district judge Christopher Williams said the asylum seeker posed a “significant risk of reoffending”.

He also told the court that Kebatu “couldn’t have anticipated” his offending “would cause such a response from the public”.

“Particularly in Epping,” the judge said, “but also across the UK, resulting in mass demonstrations and fear that children in the UK are not safe.

“It’s evident to me that your shame and remorse isn’t because of the offences you’ve committed but because of the impact they’ve had.”


Kebatu bowed his head to the judge before he was led to the cells.

Chelmsford Magistrates’ Court was told Kebatu had tried to kiss the teenager, put his hand on her thigh and brushed her hair after she offered him pizza.

Kebatu, 41, also told the girl and her friend he wanted to have a baby with them and invited them back to the hotel.

The incident happened on 7 July, about a week after he arrived in the UK on a boat.

Read more on asylum:
Migrant hotel critics come face to face with asylum seekers
The key numbers driving the immigration debate

Image: The incident sparked protests in the Essex town and nationwide. Pic: PA

The girl later told police she “froze” and got “really creeped out”, telling him: “No, I’m 14.”

Kebatu was also found guilty of sexually assaulting a woman – putting his hand on her thigh and trying to kiss her – when she tried to intervene after seeing him talking to the girl again the following day.

He denied all the charges but was convicted earlier this month.

Kebatu knows ‘Epping is in chaos’ over actions

Prosecutor Stuart Cowen, discussing a pre-sentence report, said Kebatu admitted “he didn’t know the UK was so strict, even though he knew the Ethiopian age of consent was 18”.

Kebatu understood that “Epping is in chaos” because of what he did and that he “had got a lot of migrants in trouble,” Mr Cowen said.

He said the asylum seeker “felt very sad and felt a lot of remorse”, but added: “The word manipulative is used within the report.”

Mr Cowen also read statements from both victims, with the 14-year-old girl, who cannot be named for legal reasons, saying she is now “checking over my shoulder” when she is out with friends.

She said she prepared the statement “so that the man who did this to me understands what he has done to me – a 14-year-old girl”.

She continued: “Every time I go out with my friends, I’m checking over my shoulder.

“Wearing a skirt now makes me feel vulnerable and exposed. Seeing the bench [where the sexual assault took place] reminds me of everything that happened.

“I’m aware there have been protests because of what has happened – I’m lucky that I was not in the country when that happened.”

The adult woman, who was sexually assaulted by Kebatu and who also cannot be named for legal reasons, said: “Since the incident, I feel both angered and frustrated.

“He [Kebatu] did not even appear to know that what he’s done was wrong.”

Grooming victim’s family ‘angry’ and ‘shocked’ prosecutors didn’t see police interview video

The family of a grooming victim say they are “angry” and “heartbroken” that prosecutors didn’t see a video of her police interview during their investigations.

Jodie Sheeran, then 15, was allegedly taken to a hotel and raped in November 2004.

She’s believed to have been groomed by young men of Pakistani heritage for a year beforehand. Jodie’s son, Jayden, was born nine months later.

A man was charged, but the case was dropped a day before the trial was meant to start in 2005.

Her father, David, said they were told it was because Jodie had a “reckless lifestyle” and was “an unreliable witness”, but that they never received a formal reason.

In July, he told Sky’s The UK Tonight with Sarah-Jane Mee he thinks police “bottled it” because they were worried about being called racist.

Jodie died in November 2022; her death was alcohol-related.

It’s now emerged the Crown Prosecution Service (CPS) didn’t view the video of Jodie’s police interview as it “was not shared with us” and they didn’t know at the time that it still existed.

Instead, they only had a transcript of what she told officers.

It’s unclear exactly why this happened, but Staffordshire Police said the footage was available in 2019, when the CPS and police reviewed the case, and in 2023, when the investigation was opened again.

Image: Jodie Sheeran with her mother Angela


“I don’t know if I’ve been misled [or] it was an accident,” Jodie’s mother, Angela, told Sky News’ Sarah-Jane Mee.

“To suddenly say evidence has been there all along – and I’ve got every single letter, every email to tell me they haven’t got the evidence any more… and then it’s emerged Staffordshire Police did have the evidence after all – it was shocking really.”

The CPS watched the video last month and said the transcript is an accurate representation of what Jodie says on the tape.

However, it hasn’t changed the CPS’s view that there’s no realistic prospect of conviction – and it won’t be taking any further action.

Image: Jodie’s father David (right, with Jayden) says it seems police and CPS ‘didn’t know what one another were doing’

Jodie’s father told Sky News he believes it shows the police and CPS “didn’t know what one another were doing – and it makes you so angry”.

“I feel like they’ve gotten away with it,” added Jodie’s son Jayden. “It’s years on now – I’m grateful they’ve found the evidence but what are they doing about it?”

‘I’ll keep fighting until I get justice’

Angela said it shows that other families in a similar situation shouldn’t “take no for an answer” from police or the CPS.

“Since losing a child, nothing else matters, so I’m not going anywhere,” she said.

“So I will keep fighting and fighting and fighting until I get justice for Jodie – and hopefully justice for probably thousands of other victims out there as well.”

Image: Angela says she will ‘keep fighting until I get justice for Jodie’

A national inquiry into grooming gangs was announced in June after a series of cases revealed that sexual abuse of mainly white girls by men of predominantly Pakistani heritage had taken place in a number of towns and cities.

A Staffordshire Police spokesperson said their thoughts remain with Jodie’s family and that a “significant amount of work has been undertaken reviewing this case several times”.

They said the interview video was “available to the Senior Investigating Officers in 2019 and 2023” and a “comprehensive contemporaneous written record” of it was given to the CPS on both occasions.

The statement added: “In August 2025, a copy of the recording was provided to the CPS who conducted due diligence to ensure the contemporaneous written record of Jodie’s ABE interview, that they reviewed in 2019 and 2023, was an accurate account of the video recording. They have confirmed this is the case.”

Read more:
Telford child abuse victims speak out
What we know about grooming gangs, from the data
The women who blew the whistle on Rotherham

Image: Jodie died in November 2022


Police said the case had been submitted for a further evidential review.

“Should any new evidence come to light, it will be referred to the CPS for their consideration,” the spokesperson added.

The CPS said: “We carried out reviews of our decision-making in this case in 2019 and 2023 using records provided by Staffordshire Police – both these reviews found that there was not enough evidence to charge the suspect with rape.

“While we requested all available records, Jodie’s video interview from 2005 was not shared with us, we were not informed that it had been retained, and it was only made available to our prosecutors recently after further requests.

“Having cross-referenced the video with detailed accounts of it previously available to us, we have determined that the conclusions we reached in our previous reviews still stand.”

:: Watch the full interview on The UK Tonight with Sarah-Jane Mee from 8pm on Tuesday
