Will the Online Safety Act protect young people’s mental health?

Recently, the Online Safety Act has returned to public discussion as changes have been made to the way adult content is accessed in the UK.

These changes have led to some high-profile comments from politicians, and even a petition to repeal the Act, which has gathered over 400,000 signatures. The Act itself passed in 2023; however, its rollout has been staggered, hence the renewed public interest as new rules come into force.

The Act makes it the duty of social media companies and platform providers to protect children and adults from harm by making them responsible for their users' safety while on their platforms. Failure to comply with these rules can result in large fines, company executives being jailed, and even sites being blocked in the UK.

But what are the actual changes that the Act is implementing? How are they supposed to protect children and young people from harm? And, most importantly, what is the evidence that these changes will actually protect mental health?

Removing illegal content 

A rule which came into force in December 2024 requires all companies to take action against illegal content being shared on their platforms, and against illegal transactions and activity taking place through their platforms and services.

The types of illegal content the Act outlines include images of sexual abuse, the selling of illegal drugs or weapons, the sharing of state-sponsored disinformation covered by the Foreign Interference Offence, and exploitative or coercive behaviour.

A survey carried out by Savanta for the Mental Health Foundation earlier this year found that 68% of young people (aged 16-21) reported having seen harmful or disturbing content online. This doesn't only normalise harmful behaviours for impressionable children. Viewing violent or abusive content can also cause mental distress and even trauma, particularly in young people.

Media-induced trauma is well documented, particularly following high-profile traumatic events. In 2021, researchers at Boston University found that, following a school shooting, people who viewed excessive coverage or upsetting content, including graphic videos of the shooting itself, were more likely to have symptoms of PTSD and other mental disorders.

The new law makes it the responsibility of social media companies, tech firms and internet providers to carry out a risk assessment of illegal content being shared on their platforms and take appropriate steps to remove it on an ongoing basis. 

Age verification 

Putting the onus on companies to remove illegal content is not the only new requirement. The Act also requires platforms to protect children from viewing content which may not be illegal, but which could still be harmful to developing minds. For example, pornography websites now have a duty to verify users' ages before they can access explicit content.

Pornography, particularly extreme or hardcore pornography, can be damaging to children's mental health. It can lead to addiction, problems with emotional intimacy and forming relationships, and skewed understandings of sex and of gender roles in relationships. Exposure to pornography at a young age makes children more likely to develop symptoms of anxiety and more vulnerable to sexual exploitation, as sharing explicit images with young children is a common grooming tactic of predators. The average age at which children are first exposed to online pornography is 12, with 15% first exposed before the age of 10.

As of July 2025, websites in the UK hosting pornography must verify users' ages before allowing them to access explicit content. It is these age-verification rules which have caused some backlash. Critics argue that, because users now need to verify their age by presenting ID or using other tools such as facial-recognition software, there are risks of data breaches and threats to users' privacy.

Social media use 

In addition to pornography sites, there is a specific onus on social media platforms to risk-assess the content they host and, where it could be harmful to children, to enforce appropriate age limits. Most children over the age of 10 have a presence on social media, with the most popular platforms being YouTube and WhatsApp: