Will the Online Safety Act protect young people’s mental health?



At a glance

The UK Online Safety Act makes it the duty of social media companies and platform providers to protect children and adults from harm by making them responsible for their users’ safety whilst on their platforms.

  • There is evidence that children accessing pornographic and sexually explicit material does have a negative impact on their mental health. The Act brings in new age verification requirements to prevent this.
  • There is also a documented link between online material that promotes self-destructive behaviours and negative mental health outcomes. The Act requires online platforms to risk assess their content and remove material that could lead to these behaviours.
  • Legislation such as the Online Safety Act will not, on its own, keep children and young people safe online. Media literacy, and teaching young people to navigate the online environment safely and in a balanced way, is just as important.


Recently, the Online Safety Act has returned to public discussion as changes have been implemented to the way adult content is accessed in the UK.

These changes have led to some high-profile comments from politicians, and even a petition to repeal the Act which has generated over 400,000 signatures. The Act itself passed in 2023; however, its rollout has been staggered, hence the renewed public interest as new rules come into force.

The Act makes it the duty of social media companies and platform providers to protect children and adults from harm by making them responsible for their users’ safety whilst on their platforms. Failure to comply with these rules can result in large fines, company executives being jailed and even sites being banned in the UK.

But what are the actual changes that the Act is implementing? How are they supposed to protect children and young people from harm? And, most importantly, what is the evidence that these changes will actually protect mental health?

 

 

 

Removing illegal content 

A rule which came into force in December 2024 requires all companies to take action against illegal content being shared on their platforms or illegal transactions and activity taking place through their platforms and services.   

The types of illegal content and activity the Act outlines include images of sexual abuse, the selling of illegal drugs or weapons, the sharing of state-sponsored disinformation through the Foreign Interference Offence, and exploitative or coercive behaviour.

A survey carried out by Savanta for the Mental Health Foundation earlier this year found that 68% of young people (aged 16-21) reported having seen harmful or disturbing content online. This doesn’t only normalise harmful behaviours for impressionable children. Viewing violent or abusive content can also cause mental distress and even trauma, particularly in young people.

‘Media-induced trauma’ is well documented, particularly following high-profile traumatic events. In 2021, researchers at Boston University found that, following a school shooting, people who viewed excessive coverage or upsetting content, including graphic videos of the shooting itself, were more likely to have symptoms of PTSD and other mental disorders.

The new law makes it the responsibility of social media companies, tech firms and internet providers to carry out a risk assessment of illegal content being shared on their platforms and take appropriate steps to remove it on an ongoing basis. 

 

Age verification 

Putting the onus on companies to remove illegal content isn’t the only new rule. The Act also requires platforms to protect children from viewing content which might not be illegal, but which could otherwise be harmful to developing minds. For example, pornography websites now have a duty to verify the ages of users before they can access explicit content.

Pornography, particularly extreme or hardcore pornography, can be damaging to children’s mental health. It can lead to addiction, problems with emotional intimacy and forming relationships, and skewed understandings of sex and gender roles in relationships. Exposure to pornography at a young age also makes children more likely to develop symptoms of anxiety and more vulnerable to sexual exploitation, as sharing explicit images with children is a common grooming tactic of predators. The average age at which children are first exposed to online pornography is 12, with 15% first exposed before the age of 10.

As of July 2025, websites in the UK hosting pornography must verify users’ ages before allowing them to access explicit content. It is these age verification rules which have caused some backlash. Critics of the new rules argue that because users now need to verify their age by presenting ID, or by using other tools such as facial recognition software, there are risks of data breaches or of users’ privacy being compromised.

 

 

Social media use 

In addition to pornography sites, there is a specific onus on social media sites to risk-assess the content on their platforms and, if it could be harmful to children, to enforce appropriate age limits. Most children over the age of 10 have a presence on social media, with the most popular platforms being YouTube and WhatsApp.

This is despite most social media platforms already having policies that restrict access for children (many set 13 as the minimum age for an account).

Last year, MQ published a report into the impact internet use has on children and young people’s mental health. This report found that internet activities, including social media, are not bad per se and can be neutral or even beneficial to young people’s mental health when balanced with other activities. 

The real risk comes when internet activities are not balanced with, or replace, healthy habits such as face-to-face interaction, sleep and homework, or when use becomes addictive. Another risk is the quality of young people’s experiences on social media, which can often be negative (such as cyberbullying, exposure to harmful content or social comparison).

Despite this evidence of the nuanced impact social media platforms have on children and young people’s mental health, many people still believe it is necessary to restrict access to social media until the age of 15, 16 or even 18. In Australia, for example, new laws come into force this December preventing people under the age of 16 from having social media accounts.

The UK Online Safety Act doesn’t call for specific age limits, but instead requires providers and social media companies to carry out rigorous risk assessments and then enforce age restrictions appropriately and robustly.  

 

 

Removing content that promotes self-destructive behaviours 

The Act is described as ‘putting children first’ and is aimed at protecting children from accessing content which could cause them distress or harm, as well as protecting vulnerable adults.

One of the biggest changes is a new law requiring sites to rapidly remove content which promotes self-harm or suicide. This is enforced through a new criminal offence of intentionally encouraging or assisting serious self-harm.

This is important as a survey carried out by the Children’s Commissioner found that young people said material about self-harm, suicide and eating disorders was ‘prolific’ and that violent content was ‘unavoidable’.  

Viewing content that glorifies harmful behaviours such as self-harm or eating disorders, or even content that shares tips, can have catastrophic effects on children and young people’s mental health. Ofcom, the regulator in charge of enforcing the new rules, published a report by Ipsos UK and TONIC Research which found that:

“Across the board, young people consistently described the triggering effects of harmful content. The most prevalent negative emotions we heard were feelings of intense anxiety, which often stemmed from the distressing and graphic nature of the content. In addition to anxiety, fear was also mentioned as a prominent emotion experienced from encountering this type of content, in which young people recounted instances where they felt genuine fear for the safety of others.”

Online content: Qualitative Research. Experiences of Children encountering online content relating to eating disorders, self-harm and suicide.

The Act recognises that there is often content shared online that is ‘legal but harmful’. For example, influencers who promote extreme diets or unobtainable standards of beauty. Content such as this, whilst not illegal and therefore not in the scope of the Act, can still influence the behaviours of young people and, evidence suggests, can even lead to eating disorders.

Driven into Darkness: How TikTok’s ‘For You’ Feed Encourages Self-Harm and Suicidal Ideation, Amnesty International.

The report investigates TikTok’s ‘For You’ feed, which relies on AI to personalise content based on user interactions. It concludes that, through seamless personalisation, TikTok has created a platform that is highly addictive and exposes users to serious health risks. For children and young people across the world living with depression, anxiety and other mental health issues, it can quickly send them down “rabbit holes” of depressive content, including videos romanticising, trivialising and encouraging self-harm and suicide. The report emphasises the lack of transparency in TikTok’s content moderation policies and the need for the platform to prioritise user safety over engagement. It calls on governments to hold tech companies accountable for the potential risks associated with their algorithms and advocates for robust AI regulations.

 

 

The report also found that children who had a history of mental illness were particularly vulnerable. Some young people reported that during periods of illness they had been ‘consumed and immersed in the [harmful] content’ with this causing distressing cycles of ill-health. 

This is exacerbated by the increasing amount of misinformation posted online; for example, videos about ADHD on platforms like TikTok which are designed for maximum engagement rather than accuracy. Content like this deliberately takes relatable human experiences, such as daydreaming or procrastination, and tells people these are symptoms of ADHD. This may lead some young people to believe they have ADHD or another neurodevelopmental or mental health condition when they are actually just experiencing the normal ups and downs of life.

Whilst the changes in the Act that require companies to identify and remove content that promotes self-harm, suicide and eating disorders have been welcomed, many critics don’t think the rules go far enough. In a letter to the Home Secretary Yvette Cooper, charities including the NSPCC and Barnardo’s called on the government to expand the rules to cover private messaging services such as Snapchat and WhatsApp. They pointed out that 50% of cases in which child sexual abuse images were shared involved Snapchat, making this a problematic loophole in the new law.

 

 

Are we locking the door after the horse has bolted? 

The Online Safety Act is seen by many as an imperfect start. Whilst putting the responsibility on companies to ensure the safety of their users is a positive development, there are fair criticisms of the new rules. 

The Act focuses on platforms where users can share and exchange content and information, and particularly on the aspects which have been most prominent in the public debate. It doesn’t address technologies where content is sourced differently, such as direct messaging platforms or generative text engines like ChatGPT. Nor does it address predatory practices in gaming, or regulate the mechanisms behind hyper-targeted content, which can be harmful, particularly at young ages.

One of the key takeaways from MQ’s report last year was that retrofitting safety mechanisms into online platforms is often ineffective and usually done in response to people being harmed. 

Instead, we should be looking more closely at algorithmic recommendation systems and intentionally building a digital future that is safe, healthy and empowering, where safeguards are required to be considered and incorporated into new technology from the start rather than as an afterthought.

In the meantime, it’s up to us all to help children and young people become media literate, helping them to navigate the online environment safely, as legislation alone will not keep them completely safe.
