When someone posts something illegal or harmful on Facebook, Twitter, or YouTube, who’s responsible? The person who posted it, or the platform that hosted it? This question has shaped some of the most important laws governing the internet today.
The answer, in most countries, is surprisingly generous to the platforms. It’s called “safe harbour” protection, and it’s the reason tech companies can let billions of people post content without getting sued into oblivion every time someone breaks the law.
What Safe Harbour Actually Means
Think of online platforms as landlords of digital space. If a tenant commits a crime in their apartment, you wouldn’t normally blame the landlord. Safe harbour works the same way: as long as platforms follow certain rules, they’re not legally responsible for what their users post.
Without this protection, websites would either have to review every single post before it went live (impossible at scale) or they’d simply shut down user-generated content altogether. No comments, no reviews, no social media as we know it.
How Different Countries Handle It
United States: Section 230
The US has the most protective approach, thanks to Section 230 of the Communications Decency Act from 1996. Its key provision is just 26 words long, but it's been called "the law that created the internet."
Section 230 says platforms won't be treated as the publisher or speaker of user content, so they can't be held liable for it. They can also moderate content in good faith without losing protection. This gives platforms huge freedom to decide what stays up and what comes down.
There’s been growing pushback, though. Critics argue Section 230 lets platforms avoid responsibility for harmful content like misinformation and hate speech. Both Republicans and Democrats have called for changes, though for different reasons.
European Union: The E-Commerce Directive and DSA
Europe takes a more conditional approach. Under the E-Commerce Directive, platforms get safe harbour if they’re acting as neutral intermediaries. Once they know about illegal content, they need to remove it quickly or they lose protection.
The newer Digital Services Act, which started applying in 2024, builds on this. It keeps safe harbour but adds stricter obligations. Large platforms must have better content moderation systems, be transparent about their algorithms, and respond faster to illegal content. The focus is on making platforms more accountable without making them liable for everything users post.
United Kingdom: Staying Close to Europe
The UK kept similar rules after Brexit. Its Electronic Commerce Regulations mirror the EU’s approach: platforms aren’t liable if they don’t know about illegal content, but they must act when they find out.
The Online Safety Act, which passed in 2023, adds new duties. Platforms have to protect users from illegal content, and services likely to be accessed by children must shield them from harmful material. (An earlier proposal to make the largest services police "legal but harmful" content for adults was dropped before the Act passed, replaced by user empowerment tools.) It's a shift toward more proactive responsibility, though safe harbour principles still apply.
India: Walking a Tighter Rope
India’s approach is built on Section 79 of the Information Technology Act, 2000, which offers safe harbour to intermediaries. But the protection comes with conditions that are stricter than those in most Western countries.
To qualify, platforms must follow specific “due diligence” requirements: publish clear usage rules, appoint grievance officers to handle complaints, and cooperate with law enforcement when asked. If they don’t follow these rules, they lose their immunity.
The IT Rules 2021: Turning Up the Heat
The real game-changer came with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules fundamentally reshaped what platforms must do to keep their safe harbour status.
Here’s what changed. Social media companies with more than 5 million registered users in India now need to appoint three specific officers based in the country: a chief compliance officer, a nodal contact person, and a grievance officer. These aren’t just boxes to tick: these officers are personally accountable for the platform’s compliance.
Platforms must remove content within 36 hours of receiving a court order or government notification. For specific content like nudity or morphed images of women, they have just 24 hours. That’s much faster than most countries require.
The most controversial part is the traceability requirement. Messaging platforms must be able to identify the “first originator” of information when authorities demand it for serious offences. WhatsApp pushed back hard on this, arguing it would break end-to-end encryption. The case is still being fought in Indian courts.
There’s also a monthly reporting requirement. Platforms must publish transparency reports showing how many complaints they received, what action they took, and details about content removed proactively using automated tools.
The rules also create a three-tier grievance mechanism, though strictly speaking it applies to the digital news publishers and streaming services covered by the same rules: complaints go first to the publisher, then to a self-regulatory body, and finally to a government oversight committee. (For social media platforms, users complain to the grievance officer, with appeals to government-appointed committees added by a later amendment.) It’s meant to provide accountability, though critics worry about government overreach.
Why India’s Approach Is Different
India’s stricter stance reflects its specific concerns. With such a large population online and multiple languages, misinformation can spread fast and have serious real-world consequences. There have been instances of mob violence triggered by viral rumours on WhatsApp.
The government argues these rules are necessary to maintain public order and national security. Critics counter that they give authorities too much power to pressure platforms into censoring legitimate speech, especially political criticism.
Australia: Focused on Specific Harms
Australia doesn’t have a single safe harbour law for all content. Instead, it has carved out protections in specific areas, like copyright and defamation, while imposing obligations in others.
The country has led the way on niche issues. Its Online Safety Act targets cyberbullying and image-based abuse. It also passed controversial laws requiring platforms to negotiate payment with news publishers and to remove abhorrent violent content quickly (after the Christchurch shooting was livestreamed).
Australia is willing to hold platforms directly accountable in ways other countries haven’t tried yet.
Other Approaches Worth Noting
Canada is working on an Online Harms Act that would create new duties to remove specific content, such as child exploitation material and intimate images shared without consent, within 24 hours.
Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) requires platforms to correct or remove falsehoods when the government directs them to, which raises free speech concerns but shows another model of platform regulation.
The Common Threads
Despite differences, most countries agree on a few things. Platforms shouldn’t be treated exactly like publishers, because they don’t create the content. But they also shouldn’t be completely hands-off when harmful or illegal stuff spreads on their watch.
The sweet spot everyone’s looking for is somewhere between “not our problem” and “responsible for everything.” That spot keeps shifting as we learn more about how platforms shape public conversation, mental health, elections, and social cohesion.
Why This Matters
Safe harbour laws determine what kind of internet we have. Too much protection and platforms have no incentive to deal with toxic content. Too little and they’ll censor aggressively to avoid legal risk, or small startups won’t be able to compete with giants who can afford compliance teams.
We’re in a transition period. The original safe harbour laws were written when the internet was much smaller and platforms were truly passive hosts. Now they use algorithms to recommend content, they profit from engagement, and they have enormous influence over public discourse.
Countries are experimenting with updated rules that keep the good parts of safe harbour while adding accountability. There’s no consensus yet on where to draw the lines, and the laws will keep evolving as technology and society change.
What seems clear is that the old “just a platform” defence is wearing thin, and the future probably involves more responsibility, not less.
