
Online Safety Act 2023

 


The Online Safety Act 2023 protects individuals from harmful user-generated online content (with a focus on illegal content and content that is harmful to children) by giving regulators the power to act where users are exposed to abuse, threats, or hate online.

(The information below is for guidance only - if necessary, seek legal advice)

 
 
 

The Act's duties apply to search services and services that allow users to post content online or to interact with each other. This covers a range of websites, apps and other services, including social media services, gaming platforms, video-sharing sites, online forums, dating services, and online instant messaging services.

The Act applies to services even if the companies providing them are based outside the UK, provided the services have links to the UK. A service has links to the UK if it has a significant number of UK users, if the UK is a target market, or if it is capable of being accessed by UK users and there is a material risk of significant harm to such users.

The Online Safety Act tackles a range of harmful online content, particularly content that is harmful to children, ensuring that child users have age-appropriate experiences and are shielded from harmful material. The Act also protects adult users, for example by giving them more control over what they see. This content falls into two categories: priority and non-priority offences.

Priority offences cover illegal content (i.e. content relating to acts that are offences under UK law) amounting to serious offences. Platforms must address such content by removing it and by taking proactive steps to prevent harm and reduce the risk of it appearing on the platform in the first place. Examples include:

  • hatred, such as stirring up racial hatred or hatred on the basis of religion
  • terrorism
  • harassment
  • stalking
  • coercive control
  • threats
  • abuse
  • intimate image abuse
  • extreme pornography
  • child sexual abuse / exploitation
  • sexual exploitation of adults
  • human trafficking
  • assisting or encouraging suicide
  • financial fraud
  • selling illegal drugs / weapons
  • facilitating unlawful immigration
  • proceeds of crime

Non-priority offences cover illegal content that is less serious; it still needs to be addressed, but does not require the same level of proactive intervention to reduce the risk of it appearing on the platform as priority (serious) offences. Examples include:

  • False communications - Sending messages known to be false with the intention to cause harm.
  • Threatening communications - Sending messages that threaten violence, serious injury, or financial loss.
  • Certain types of pornography - While extreme pornography is a priority offence, other forms of pornography might fall under non-priority illegal content.
  • Content that promotes self-harm - Content encouraging or providing instructions for suicide or self-harm, unless it's considered "primary priority content harmful to children".
  • Online harassment and stalking - Depending on the severity and context, some forms of online harassment and stalking might be considered non-priority.
  • Certain fraud and financial offences - While some financial crimes are priority offences, others might be non-priority.

Note: Mis- and disinformation will be captured by the Online Safety Act where it is illegal or harmful to children. Services will be required to take steps to remove illegal disinformation content if they become aware of it on their services. This includes the removal of illegal, state-sponsored disinformation through the Foreign Interference Offence, which forces companies to take action against a range of state-sponsored disinformation and state-linked interference online.

Ofcom has produced codes of practice to provide guidance to online platforms in relation to the harmful content listed above, which can be found on its website.

Online platforms must:

  • Remove illegal content swiftly
  • Enforce age verification (especially for pornographic content)
  • Carry out risk assessments
  • Provide user tools to report and block content
  • Be transparent about moderation policies


Ofcom is the independent regulator responsible for enforcing the Act. It has published Codes of Practice and guidance to help companies comply. If a company fails to comply, Ofcom can:

  • Require changes to platform design
  • Fine companies up to £18 million or 10% of global revenue - whichever is higher
  • Hold executives criminally liable in extreme cases
  • Issue formal warnings or, in extreme cases, seek court orders to block services in the UK

Examples of situations Muslims may face online - such as on social media or in comments sections - which should be reported are provided below. If the platform fails to act (i.e. remove the offending post), the matter can be escalated to Ofcom. Reporting also helps platforms strengthen their moderation systems and prevent similar content from appearing. It is also important to be familiar with the privacy settings, block features, and content filters on each platform; the Online Safety Act requires platforms to make these tools easily accessible.
 
