TikTok has launched the ability for parents and guardians to filter out videos they don’t want their children to see.
The feature is an addition to the app's family pairing functionality, which allows adults to link their account to their teenager's and control settings such as screen time limits.
TikTok's users could already set content filters for themselves, allowing them to avoid videos associated with specific words or hashtags.
Julie de Bailliencourt, global head of product policy, told Sky News that giving parents the ability to set up such filters was done primarily with user safety in mind.
However, teenagers will initially only be alerted to the filters their parents have selected, and can simply choose not to opt in.
"We wanted to make sure we had the right balance of pragmatism and transparency to enable families to choose the best experience for their own family because every family is different," said Ms de Bailliencourt.
“We also wanted to make sure we respect young people’s right to participate. So by default, teens can view the keywords their parent or caregiver has added.”
A late amendment last week to the Online Safety Bill, the proposed legislation that aims to regulate online content to keep people safe, could see coroners and bereaved parents granted access to data on the phones of deceased children.
TikTok would not be drawn on the specific amendment, saying only that it is working closely with the government on the development of the legislation.
The platform has come under mounting pressure over its links to China, as it is owned by Beijing-based ByteDance, and earlier this year it was banned from UK government phones.
Young users to help form moderation guidelines
TikTok has also announced a global youth council, made up of young people who use the platform, which is due to launch later this year.
It will operate similarly to TikTok’s content and safety advisory councils, made up of independent experts who help inform its approach to moderation.
Meanwhile, the company said there had been no change to its policies around election misinformation after rival YouTube’s decision to stop deleting false claims that the 2020 US vote was stolen.
The Google-owned platform announced the change earlier this month, reversing a policy that had been in place since shortly after the 2020 election, which Donald Trump wrongly claims was illegitimate.