
The platform layer is not enough: why online safety needs the network

Mark Mullings


Governments around the world are now legislating online safety. Platform liability is the centrepiece of every major framework. But the model has a structural flaw that no amount of enforcement can fix.


A global regulatory moment

Online safety regulation has moved from debate to statute across multiple continents in fewer than three years. The United Kingdom’s Online Safety Act came into force in 2023 and now applies to more than 100,000 platforms that serve British users. Ofcom is phasing in its implementation and enforcement programme through 2026, with the largest platforms facing the most stringent and earliest obligations.

The European Union’s Digital Services Act has followed a similar trajectory. In July 2025, the European Commission published updated guidelines on protecting minors online, targeting algorithmic amplification and age-verification gaps. By October 2025, the Commission had opened formal investigations into Snapchat, YouTube, Apple’s App Store, and Google Play. Each case centres on the same question: are platforms doing enough to protect users, particularly children, from harm?

In Australia, Phase 2 of the Online Safety Codes took effect between December 2025 and March 2026, extending obligations across search, messaging, and app distribution services. The Australian Parliament has also passed legislation banning under-16s from social media platforms outright, placing a legal duty on platforms to enforce age-based access at the point of registration.

The United States has moved more slowly, but the Kids Online Safety Act was reintroduced in the 119th Congress in May 2025, following Senate passage and House failure in the previous session. Momentum is building. The direction of travel is consistent across all four jurisdictions: platforms must bear direct responsibility for the harm their services enable.


100,000+  platforms now subject to UK Online Safety Act obligations, across social media, search, messaging, and user-to-user services


A compelling theory with a structural flaw

The platform-liability model is compelling in theory. Harms are generated on platforms. Platforms control the rules of engagement. They hold the data. And the profit. It follows logically that platforms should be accountable.

The theory is not wrong. It is incomplete. Platform liability treats online safety as a problem that can be solved within the application layer and applies regulatory force to make that happen faster. What it cannot do is fix the structural reality that platforms are not the only layer of digital infrastructure that matters.

“Platform liability is not wrong. It is incomplete. No regulatory framework can fix a structural gap by legislating around it.”

Consider what the application layer does not control. It does not control the network traffic that flows beneath its services. Nor does it control the devices that connect to it. It does not control the DNS queries that precede every session, or the behavioural patterns that emerge across different services and over time. It has no visibility into the connections that happen before a user lands on its interface, and no authority over what happens on other platforms or services the same user also visits.

More fundamentally: the platform-liability model assumes that platforms will comply in good faith, with sufficient competence, across every jurisdiction in which they operate. The EU’s ongoing investigations into some of the world’s largest and best-resourced technology companies suggest that assumption is optimistic. The structural flaw is not one that greater regulatory pressure will resolve.

Where the responsibility ends up

When the platform-liability model reaches its limits, the burden of protection shifts. It does not disappear. It disperses, typically towards households and individuals who lack the tools, the time, and the technical context to exercise it meaningfully.

Parents are told to use parental controls. Children are told not to share personal information. Families are advised to have conversations about online risk. All this guidance is offered in good faith. However, none of it constitutes a security architecture.

Parental controls are inconsistent across devices and operating systems. They require initial configuration and ongoing maintenance from users who may not understand what they are configuring. They fail silently when devices are updated, when children use alternative devices, or when the controls are simply switched off. The platforms that are most likely to generate harm are often the ones least likely to provide effective controls.

The same dynamic plays out in the enterprise context. Small and medium-sized businesses, which represent most of the economic activity in most countries, are frequently advised to adopt security tools, train staff, and monitor their own networks. The advice is correct. The capacity to follow it, consistently, across a business with limited IT resources, is another matter.

The consequence is a protection gap. The regulation exists. The obligations are real. The enforcement machinery is being assembled. But between the platform and the individual, there is a layer of infrastructure that online safety frameworks have not yet addressed.

The layer that sits beneath the chaos

Every connected device, every application session, every online interaction, passes through network infrastructure before it reaches its destination. The network is not a passive conduit. It’s an active participant in every connection that occurs within it. It sees every packet before it arrives. It can inspect, identify and act.

Network operators, specifically telecommunications providers and internet service providers, occupy a unique position. They are present at every point of connection and are not dependent on platform cooperation. They do not rely on end-user action. Their infrastructure is the consistent layer that every other part of the digital environment depends on.


98%  of businesses in most economies are SMEs. The majority have no dedicated security infrastructure. The network is often the only protective layer that will ever reach them.


Network-edge security uses this position deliberately. DNS interception identifies attempts to reach malicious infrastructure before a connection is established. Deep packet inspection identifies anomalous traffic patterns that indicate compromise or exploitation. Device fingerprinting identifies every device on a network, including unmanaged IoT devices that platforms cannot see and endpoint security cannot reach.
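The DNS-interception step described above can be sketched in a few lines: before a query is resolved, the requested domain is checked against a feed of known-malicious domains, so a connection is refused before it is ever established. This is a minimal illustration only; `THREAT_FEED`, the domain names, and the verdict strings are invented placeholders, not any specific operator or vendor API.

```python
# Minimal sketch of DNS-layer interception at the network edge:
# before resolving a query, check the requested domain against a
# feed of known-malicious domains. THREAT_FEED and the domains
# below are illustrative placeholders, not a real threat feed.

THREAT_FEED = {"malware-c2.example", "phish-login.example"}

def intercept_dns(domain: str) -> str:
    """Return a verdict for a DNS query seen at the network edge."""
    # Normalise the name and walk up the label hierarchy so that
    # sub.malware-c2.example is caught by a feed entry for its parent.
    labels = domain.lower().rstrip(".").split(".")
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in THREAT_FEED:
            return "blocked"
    return "allowed"

print(intercept_dns("sub.malware-c2.example"))  # blocked
print(intercept_dns("example.org"))             # allowed
```

Because the check happens where the query transits the network, it applies to every device behind the connection, including unmanaged ones that run no endpoint software.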

Behavioural intelligence builds on these signals. Rather than matching traffic against lists of known bad actors, which are always incomplete and always retrospective, behavioural intelligence watches for unusual patterns across the full device and network estate. A device that queries an unfamiliar domain, initiates an outbound connection to a remote server, and simultaneously runs an application that has no reason to communicate externally: individually these signals are ambiguous; together, they indicate risk.
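The correlation logic above can be sketched as a weighted score over independent signals: each signal is weak on its own, but their combination crosses a risk threshold. The signal names, weights, and threshold here are assumptions chosen for illustration, not a production detection model.

```python
# Illustrative sketch of combining weak behavioural signals into one
# risk verdict. Signal names, weights, and the threshold are invented
# for demonstration, not a real detection model.

SIGNAL_WEIGHTS = {
    "unfamiliar_domain_query": 0.3,     # device queried a never-seen domain
    "outbound_remote_connection": 0.3,  # unsolicited connection to a remote server
    "unexpected_app_traffic": 0.4,      # app with no reason to talk externally did so
}
RISK_THRESHOLD = 0.7  # one signal is ambiguous; together they indicate risk

def risk_score(observed: set[str]) -> float:
    """Sum the weights of the behavioural signals observed for a device."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed)

def verdict(observed: set[str]) -> str:
    return "flag" if risk_score(observed) >= RISK_THRESHOLD else "monitor"

# One signal alone stays below the threshold; all three together are flagged.
print(verdict({"unfamiliar_domain_query"}))  # monitor
print(verdict({"unfamiliar_domain_query",
               "outbound_remote_connection",
               "unexpected_app_traffic"}))   # flag
```

Real systems weight and decay such signals statistically rather than with fixed constants, but the principle is the same: ambiguity at the level of a single event, confidence at the level of the pattern.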

“The network is the one layer that is always present, always active, and not dependent on platform cooperation or end-user competence.”

This approach works because it operates at the infrastructure layer, not the application layer. It is not dependent on platforms disclosing what their services are doing, nor on users knowing what to configure or how to respond. It is not dependent on the goodwill or compliance of any service operating within the network.

What this means for telecoms operators and ISPs

The practical implication for network operators is significant. The same infrastructure that delivers connectivity can deliver protection. The capability is not theoretical. It exists today, and operators who deploy it are already providing a category of protection that platform-liability frameworks cannot reach.

For consumer operators, this means every subscriber is protected from the moment they connect, regardless of the devices they use, the platforms they visit, or the choices they make. A household with inconsistent parental controls and a collection of unmanaged smart home devices receives the same baseline protection as a household that has invested time in configuring every setting correctly. The network does not discriminate by technical literacy.

For enterprise operators, network-edge protection addresses the gap that endpoint security and platform-level controls leave open. Unmanaged devices, contractor access, remote workers operating across multiple networks, shadow IT, and IoT proliferation all create exposure points that do not appear in any platform’s security reporting. Network visibility closes these gaps systematically.

Regulators are beginning to recognise this. The UK Online Safety Act places obligations squarely on user-to-user platforms and search services, but the broader policy direction in the UK and across the EU acknowledges that critical infrastructure providers, including telecoms operators, carry responsibility for the resilience of digital services. Australia’s under-16 social media ban places enforcement duties on platforms, but the enforcement mechanism requires network-level capability to be credible at scale.

Operators who build network-edge safety capability now are building ahead of the regulatory curve. They are also building a commercial proposition. Security-as-a-service, offered to subscribers as a differentiated feature of connectivity rather than a separately purchased product, generates new average revenue per user, reduces fraud exposure, and demonstrates regulatory responsibility to governments that are actively looking for infrastructure partners.

The layered model that actually works

Online safety does not have a single solution. The platform layer matters. Age verification, algorithmic transparency, content moderation, and terms of service enforcement are all necessary components of a credible safety framework. Regulation that compels platforms to take these obligations seriously is doing important work.

The question is not whether platform-liability regulation is wrong. The question is whether it is sufficient. The evidence, from the EU’s ongoing enforcement actions to Australia’s decision to legislate a social media ban rather than rely on platform compliance, suggests that regulators themselves are reaching the same conclusion.

A layered model distributes responsibility across the components that are actually present in every digital interaction. Platforms bear accountability for what their services do. Device manufacturers bear accountability for the security of the products they ship. Network operators bear accountability for the infrastructure they control. Each layer does what it is uniquely positioned to do. No single layer carries the weight of the entire system.

The network layer’s unique contribution is consistency. It is the layer that cannot be opted out of, cannot be configured incorrectly by an end user, cannot be bypassed by a non-compliant platform, and cannot be disabled by a device update. It is always present, always active, and always capable of enforcing policy at the point of connection.

That consistency is precisely what online safety frameworks lack when they rely on platform liability alone. Adding the network layer does not make regulation redundant. It makes regulation effective.


The incomplete promise

The governments legislating online safety today are doing something important. They are establishing that digital harm is a regulatory matter, that platforms cannot self-govern their way out of accountability, and that children, consumers, and small businesses deserve protection that does not depend on their own technical capability.

The promise of that legislation will remain incomplete for as long as it stops at the application layer. The network infrastructure that carries every connection offers a layer of protection that platforms cannot replicate, that households cannot configure into existence, and that regulators cannot legislate into being through platform liability alone.

That is the gap the network layer can fill. Not by claiming to solve online safety on its own, but by making one important layer stronger, more consistent, and less dependent on the goodwill or competence of the platforms and services operating above it. The operators who understand this are already building the infrastructure of trust that online safety regulation is trying to create.


About BlackDice Cyber

BlackDice Cyber is a telecom-native cybersecurity company headquartered in Leeds, United Kingdom. Its platform delivers AI-powered behavioural intelligence and real-time threat detection directly within operator infrastructure, enabling telecommunications providers to protect subscribers, generate new revenue, and meet evolving regulatory requirements. BlackDice operates globally in partnership with leading telecoms operators.

To learn more, visit blackdice.ai
