TikTok and Instagram have been accused of targeting teenagers with suicide and self-harm content – at a higher rate than two years ago.
The Molly Rose Foundation – set up by Ian Russell after his 14-year-old daughter took her own life having viewed harmful content on social media – commissioned analysis of hundreds of posts on the platforms, using accounts registered as a 15-year-old girl based in the UK.
The charity claimed videos recommended by algorithms on the For You pages continued to feature a “tsunami” of clips containing “suicide, self-harm and intense depression” to under-16s who have previously engaged with similar material.
One in 10 of the harmful posts had been liked at least a million times. The average number of likes was 226,000, the researchers said.
Mr Russell told Sky News the results were “horrifying” and showed online safety laws are not fit for purpose.
Image: Molly Russell died in 2017. Pic: Molly Rose Foundation
‘This is happening on PM’s watch’
He said: “It is staggering that eight years after Molly’s death, incredibly harmful suicide, self-harm, and depression content like she saw is still pervasive across social media.
“Ofcom’s recent child safety codes do not match the sheer scale of harm being suggested to vulnerable users and ultimately do little to prevent more deaths like Molly’s.
“The situation has got worse rather than better, despite the actions of governments and regulators and people like me. The report shows that if you strayed into the rabbit hole of harmful suicide self-injury content, it’s almost inescapable.
“For over a year, this entirely preventable harm has been happening on the prime minister’s watch and where Ofcom have been timid it is time for him to be strong and bring forward strengthened, life-saving legislation without delay.”
Image: Ian Russell says children are viewing ‘industrial levels’ of self-harm content
After Molly’s death in 2017, a coroner ruled she had been suffering from depression, and the material she had viewed online contributed to her death “in a more than minimal way”.
Researchers at Bright Data looked at 300 Instagram Reels and 242 TikToks to determine if they “promoted and glorified suicide and self-harm”, referenced ideation or methods, or “themes of intense hopelessness, misery, and despair”.
Instagram
The Molly Rose Foundation claimed Instagram “continues to algorithmically recommend appallingly high volumes of harmful material”.
The researchers said 97% of the videos recommended on Instagram Reels for the account of a teenage girl, who had previously looked at this content, were judged to be harmful.
Some 44% actively referenced suicide and self-harm, they said. They also claimed harmful content was included in recommendation emails sent to users.
A spokesperson for Meta, which owns Instagram, said: “We disagree with the assertions of this report and the limited methodology behind it.
“Tens of millions of teens are now in Instagram Teen Accounts, which offer built-in protections that limit who can contact them, the content they see, and the time they spend on Instagram.
“We continue to use automated technology to remove content encouraging suicide and self-injury, with 99% proactively actioned before being reported to us. We developed Teen Accounts to help protect teens online and continue to work tirelessly to do just that.”
TikTok
TikTok was accused of recommending “an almost uninterrupted supply of harmful material”, with 96% of the videos judged to be harmful, the report said.
Over half (55%) of the For You posts were found to be related to suicide and self-harm, and a single search yielded posts promoting suicidal behaviours, dangerous stunts and challenges, it was claimed.
The number of problematic hashtags had increased since 2023, with many shared by highly followed accounts that compiled 'playlists' of harmful content, the report alleged.
A TikTok spokesperson said: “Teen accounts on TikTok have 50+ features and settings designed to help them safely express themselves, discover and learn, and parents can further customise 20+ content and privacy settings through Family Pairing.
“With over 99% of violative content proactively removed by TikTok, the findings don’t reflect the real experience of people on our platform which the report admits.”
According to TikTok, it does not allow content showing or promoting suicide and self-harm, and it says banned hashtags direct users to support helplines.
‘A brutal reality’
Both platforms allow young users to give negative feedback on harmful content recommended to them. But the researchers found users can also give positive feedback on this content and continue to be sent it for the next 30 days.
Technology Secretary Peter Kyle said: “These figures show a brutal reality – for far too long, tech companies have stood by as the internet fed vile content to children, devastating young lives and even tearing some families to pieces.
“But companies can no longer pretend not to see. The Online Safety Act, which came into effect earlier this year, requires platforms to protect all users from illegal content and children from the most harmful content, like promoting or encouraging suicide and self-harm. 45 sites are already under investigation.”
An Ofcom spokesperson said: “Since this research was carried out, our new measures to protect children online have come into force.
“These will make a meaningful difference to children – helping to prevent exposure to the most harmful content, including suicide and self-harm material. And for the first time, services will be required by law to tame toxic algorithms.
“Tech firms that don’t comply with the protection measures set out in our codes can expect enforcement action.”
Image: Peter Kyle has said opponents of the Online Safety Act are on the side of predators. Pic: PA
‘A snapshot of rock bottom’
A separate report out today from the Children’s Commissioner found the proportion of children who have seen pornography online has risen in the past two years – also driven by algorithms.
Rachel de Souza described the content young people are seeing as “violent, extreme and degrading”, and often illegal, and said her office’s findings must be seen as a “snapshot of what rock bottom looks like”.
More than half (58%) of respondents to the survey said that, as children, they had seen pornography involving strangulation, while 44% reported seeing a depiction of rape, specifically of someone who was asleep.
The survey of 1,020 people aged between 16 and 21 found that they were on average aged 13 when they first saw pornography. More than a quarter (27%) said they were 11, and some reported being six or younger.
Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.
A New York jury was unable to reach a verdict in the case of Anton and James Peraire-Bueno, the MIT-educated brothers accused of fraud and money laundering related to a 2023 exploit of the Ethereum blockchain that resulted in the removal of $25 million in digital assets.
In a Friday ruling, US District Judge Jessica Clarke declared a mistrial in the case after jurors failed to agree on whether to convict or acquit the brothers, Inner City Press reported.
The decision came after a three-week trial in Manhattan federal court, during which prosecutors and the defense presented differing theories about the Peraire-Buenos' alleged actions involving maximal extractable value (MEV) bots.
An MEV attack occurs when traders or validators exploit transaction ordering on a blockchain for profit. Using automated MEV bots, they front-run or sandwich other trades by paying higher fees for priority.
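For readers unfamiliar with the mechanics, the sketch below illustrates a simple "sandwich" of the kind MEV bots perform, using a toy constant-product market maker. The pool sizes, trade amounts and the ToyAMM class are illustrative assumptions for this article; they are not a description of the brothers' alleged exploit or of how MEV-Boost works internally.

```python
# Illustrative sketch of a sandwich attack on a toy constant-product AMM.
# All numbers and names here are assumptions for demonstration only.

class ToyAMM:
    """Constant-product pool: reserve_x * reserve_y stays constant on swaps."""

    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x = reserve_x  # e.g. ETH
        self.reserve_y = reserve_y  # e.g. some token

    def buy_y_with_x(self, amount_x: float) -> float:
        """Swap amount_x of X for Y; each buy pushes Y's price up."""
        k = self.reserve_x * self.reserve_y
        new_x = self.reserve_x + amount_x
        new_y = k / new_x
        out_y = self.reserve_y - new_y
        self.reserve_x, self.reserve_y = new_x, new_y
        return out_y

    def sell_y_for_x(self, amount_y: float) -> float:
        """Swap amount_y of Y back for X."""
        k = self.reserve_x * self.reserve_y
        new_y = self.reserve_y + amount_y
        new_x = k / new_y
        out_x = self.reserve_x - new_x
        self.reserve_x, self.reserve_y = new_x, new_y
        return out_x


pool = ToyAMM(reserve_x=1_000.0, reserve_y=1_000_000.0)

# The bot pays a higher fee so its buy is ordered before the victim's trade
# (front-run), then sells immediately after it (back-run).
bot_spend = 50.0       # bot's buy, sequenced first
victim_spend = 100.0   # victim's large buy, executed at a worse price

bot_tokens = pool.buy_y_with_x(bot_spend)      # 1) front-run buy
_ = pool.buy_y_with_x(victim_spend)            # 2) victim's trade moves the price up
bot_proceeds = pool.sell_y_for_x(bot_tokens)   # 3) back-run sell at the inflated price

print(f"Bot profit in X: {bot_proceeds - bot_spend:.2f}")  # ~9.7 in this toy example
```

The profit here comes entirely from transaction ordering: the bot's trades bracket the victim's and capture the price impact the victim's own order creates.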
In the brothers’ case, they allegedly used MEV bots to “trick” users into trades. The exploit, though planned by the two for months, reportedly took just 12 seconds to net the pair $25 million.
In closing arguments to the jury this week, prosecutors argued that the brothers “tricked” and “defrauded” users by engaging in a “bait and switch” scheme, allowing them to extract about $25 million in crypto. They cited evidence suggesting that the two plotted their moves for months and researched potential consequences of their actions.
“Ladies and gentlemen, bait and switch is not a trading strategy,” said prosecutors on Tuesday, according to Inner City Press. “It is fraud. It is cheating. It is rigging the system. They pretended to be a legitimate MEV-Boost validator.”
In contrast, defense lawyers for the Peraire-Buenos pushed back against the US government's theory that the two pretended to be "honest validators" to extract the funds, though the court ultimately allowed the argument to be presented to the jury.
“This is like stealing a base in baseball,” said the defense team on Tuesday. “If there’s no fraud, there’s no conspiracy, there’s no money laundering.”
What’s at stake for the crypto industry following the mistrial?
Though the case ended without a verdict, the mistrial has left the crypto industry divided, with many observers debating the legal and technical implications of treating MEV-related activity as a potential criminal offense. Crypto advocacy organization Coin Center filed an amicus brief on Monday after opposition from prosecutors.
“I don’t think what’s in the indictment constitutes wire fraud,” said Carl Volz, a partner at law firm Gunnercooke, in a Monday op-ed for DLNews. “A jury could conclude differently, but if it does, it’ll be because the brothers googled stupidly and talked too much, for too long, with the wrong people.”
The shutdown of the US government entered its 38th day on Friday, with the Senate set to vote on a funding bill that could temporarily restore operations.
According to the US Senate’s calendar of business on Friday, the chamber will consider a House of Representatives continuing resolution to fund the government. It’s unclear whether the bill will cross the 60-vote threshold needed to pass in the Senate after numerous failed attempts in the previous weeks.
Amid the shutdown, Republican and Democratic lawmakers have reportedly continued discussions on the digital asset market structure bill. The legislation, passed as the CLARITY Act in the House in July and referred to as the Responsible Financial Innovation Act in the Senate, is expected to provide a comprehensive regulatory framework for cryptocurrencies in the US.
Although members of Congress have continued to receive paychecks during the shutdown, unlike staff at many agencies who have been furloughed or are working without pay, any legislation, including bills related to crypto, appears to have taken a backseat to resolving the shutdown.
At the time of publication, it was unclear how much support Republicans may have gained from Democrats, who have held the line in demanding an extension of healthcare subsidies and a reversal of cuts made in a July funding bill.
Is the Republicans’ timeline for the crypto bill still attainable?
Wyoming Senator Cynthia Lummis, one of the market structure bill’s most prominent advocates in Congress, said in August that Republicans planned to have the legislation through the Senate Banking Committee by the end of September, the Senate Agriculture Committee in October and signed into law by 2026.
Though reports suggested lawmakers on each committee were discussing terms for the bill, that timeline looked less likely amid the government shutdown and with the holidays approaching.
Japan’s financial regulator, the Financial Services Agency (FSA), endorsed a project by the country’s largest financial institutions to jointly issue yen-backed stablecoins.
In a Friday statement, the FSA announced the launch of its “Payment Innovation Project” in response to progress in “the use of blockchain technology to enhance payments.” The initiative involves Mizuho Bank, Mitsubishi UFJ Bank, Sumitomo Mitsui Banking Corporation, Mitsubishi Corporation and its financial arm, and Progmat, MUFG’s stablecoin issuance platform.
The announcement follows recent reports that those companies plan to modernize corporate settlements and reduce transaction costs through a yen-based stablecoin project built on MUFG’s stablecoin issuance platform Progmat. The institutions in question serve over 300,000 corporate clients.
The regulator noted that, starting this month, the companies will begin issuing payment stablecoins. The initiative aims to improve user convenience, enhance Japanese corporate productivity and innovate the local financial landscape.
The participating companies are expected to ensure that users are protected and informed about the systems they use. “After the completion of the pilot project, the FSA plans to publish the results and conclusions,” the announcement reads.
The announcement follows the Monday launch of Tokyo-based fintech firm JPYC’s Japan-first yen-backed stablecoin, along with a dedicated platform. The company’s president, Noriyoshi Okabe, said at the time that seven companies are already planning to incorporate the new stablecoin.
Japanese regulators have recently been busy setting new rules for the cryptocurrency industry, so much so that Bybit, the world’s second-largest crypto exchange by trading volume, announced it will pause new user registrations in the country while it adapts to the new conditions.
Local regulators seem to be opening up to the industry. Earlier this month, the FSA was reported to be preparing to review regulations that could allow banks to acquire and hold cryptocurrencies such as Bitcoin (BTC) for investment purposes.
At the same time, Japan’s securities regulator was also reported to be working on regulations to ban and punish crypto insider trading. Following the change, Japan’s Securities and Exchange Surveillance Commission would be authorized to investigate suspicious trading activity and impose fines on violators.