Under the UK’s Online Safety Act, measures set out in May 2024 introduce stringent new requirements to protect children online by mandating that social media firms modify algorithms that promote harmful content. Under the supervision of Ofcom, platforms must either verify users’ ages or alter their content to ensure it is safe for child users, including a ban on content relating to pornography, suicide, self-harm, and eating disorders for young users.
Platforms such as TikTok and Instagram will be required to curb the spread of violent, hateful, or abusive material, as well as online bullying and dangerous challenges, on children’s accounts. The move is part of a broader effort to make the internet safer for young users by strictly regulating the types of content accessible to them.
The Act has stirred debate. Some worry that it could restrict access to educational content on sensitive topics, while critics also question its effectiveness, arguing that it is too broad and does not sufficiently address high-risk areas such as livestreaming and direct messaging.
The legislation represents a significant step toward internet regulation in the UK focused specifically on the safety and well-being of children online.