The UK’s internet regulator, Ofcom, has published the first set of final guidelines for online service providers subject to the Online Safety Act. This move initiates a three-month countdown to the law’s first compliance deadline. Ofcom’s decision marks a major milestone, with online providers now legally required to protect their users from illegal harm.
Providers have until March 16, 2025, to assess the risk of illegal harms on their services. From March 17, 2025, they will need to implement the safety measures set out in Ofcom's Codes of Practice or use other effective measures to protect users. More than 100,000 tech firms could fall within the Act's scope and be required to protect users from a range of illegal content, including terrorism, hate speech, child sexual abuse, and financial offenses.
Non-compliance could result in fines up to 10% of global annual turnover or £18 million, whichever is greater. The Act applies to all providers of services with ties to the UK, regardless of their location. This includes a diverse range of online services, from tech giants to very small service providers across multiple sectors including social media, dating, gaming, search, and pornography.
Ofcom CEO Melanie Dawes highlighted that significant operational changes by tech companies are expected in 2025.
Ofcom’s online safety compliance measures
“What we’re announcing today is a big moment for online safety. Tech companies will need to start taking proper action on their algorithms to prevent illegal content,” Dawes said. She added that new age-check requirements will be introduced in January, followed by finalized rules on wider child protection measures by April. Ofcom’s policy statement is a starting point, and the regulator will continue to work on additional measures to address evolving tech developments, including generative AI.
The regulator also plans protocols for emergency events, measures to block accounts that share child sexual abuse material, and the use of AI to tackle illegal harms. A significant aspect of the law is the introduction of criminal liability for senior executives in certain circumstances, potentially holding tech CEOs personally accountable for non-compliance. The Online Safety Act imposes “duties of care” on tech firms, requiring them to take responsibility for harmful content uploaded and spread on their platforms.
Though the Act passed into law in October 2023, it had not yet fully come into force; Monday’s publication effectively brings the safety duties into effect. Beyond the fines outlined above, repeated breaches could expose individual senior managers to possible jail time, while in the most serious cases Ofcom could seek a court order to block access to a service in the UK or to limit its access to payment providers or advertisers.
British Technology Minister Peter Kyle said, “Ofcom’s illegal content codes are a material step change in online safety, meaning that from March, platforms will have to proactively take down terrorist material, child and intimate image abuse, and a host of other illegal content, bridging the gap between the laws which protect us in the offline and the online world.”
“If platforms fail to step up, the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites,” Kyle added.