UK to tech firms: 'Tame algorithms' to safeguard children
The UK government has directed social media platforms like Facebook, Instagram, and TikTok to modify their algorithms to filter out or downgrade harmful content, as part of new measures to protect children.
These changes, announced Wednesday by Ofcom, are among over 40 steps required under Britain's Online Safety Act, which took effect in October.
Platforms will be required to use stringent age verification to stop minors from accessing content involving suicide, self-harm, and pornography, the regulator said.
Melanie Dawes, Chief Executive of Ofcom, highlighted that children's online experiences are often marred by harmful content that they cannot escape or control. "Under the new online safety laws, our proposed Codes place the responsibility of keeping children safe squarely on tech firms," she stated. "They must tame aggressive algorithms that target children with harmful content in their personalized feeds and implement age verification to ensure a suitable experience for their age."
Social media companies use complex algorithms to decide which content to prioritize and keep users engaged. Because these systems tend to recommend more of whatever a user has already interacted with, a child who encounters harmful material can quickly be shown more of it.
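For illustration only, the sketch below shows in simplified Python how a feed ranker could "filter out or downgrade" flagged content for child accounts, as the proposed codes describe. It is not any platform's real system; the category labels, scores, and weights are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical categories, mirroring the examples named in the article.
HARMFUL_CATEGORIES = {"suicide", "self_harm", "pornography"}

@dataclass
class Post:
    post_id: str
    engagement_score: float        # the platform's usual relevance signal
    flagged_categories: set[str]   # output of a hypothetical content classifier

def rank_feed(posts: list[Post], viewer_is_minor: bool) -> list[Post]:
    ranked = []
    for post in posts:
        harmful = post.flagged_categories & HARMFUL_CATEGORIES
        if viewer_is_minor and harmful:
            # "Filter out": exclude flagged content entirely for child accounts.
            continue
        score = post.engagement_score
        if harmful:
            # "Downgrade": illustrative penalty for flagged content in other feeds.
            score *= 0.1
        ranked.append((score, post))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in ranked]

if __name__ == "__main__":
    feed = [
        Post("a", 0.9, {"self_harm"}),
        Post("b", 0.7, set()),
        Post("c", 0.5, set()),
    ]
    print([p.post_id for p in rank_feed(feed, viewer_is_minor=True)])  # ['b', 'c']
```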
Technology Secretary Michelle Donelan emphasized the necessity of introducing realistic age checks and managing algorithms to fundamentally alter how children in Britain navigate the online world. "To platforms, my message is clear: engage with us and prepare. Do not wait for enforcement and hefty fines – step up, meet your responsibilities, and act now," she urged.
Ofcom plans to finalize its Children's Safety Codes of Practice within a year, following a consultation period that ends on July 17. Once the codes are approved by parliament, the regulator will begin enforcing them, with fines for non-compliance.