The British government is considering sweeping new laws to regulate problematic content online, ranging from terrorist propaganda to fake news. A proposal unveiled on Monday would impose a "duty of care" on websites hosting user-submitted content. Under the plan, a new UK agency would develop codes of practice outlining how sites should deal with various types of harmful content.
The new proposal follows last month's mass shooting in Christchurch, New Zealand, which left 50 people dead. In the wake of that attack, Australia passed a law that requires major platforms to quickly remove violent online material or face harsh fines and possibly even jail time. On Monday, a committee of the European Parliament backed a law that would fine online platforms up to 4 percent of their revenue if they failed to take down terrorist content within an hour of being notified.
Britain’s proposal is much broader, requiring technology companies to police their platforms for a wide range of objectionable material. Companies could face fines if they don’t remove harmful material quickly.
A 100-page white paper from Theresa May’s government details the many categories of content that would be governed by the new rules, including child pornography, revenge pornography, cyberstalking, hate crimes, encouragement of suicide, sale of illegal goods, sexting by minors, and “disinformation.” The proposal would also try to stop inmates from posting online content in violation of prison rules.
Such a sweeping proposal would be unlikely to pass muster in the United States, where the First Amendment sharply limits government regulation of online content. But America is unusual; most countries have a much narrower concept of free speech that leaves governments substantial latitude to regulate content they regard as harmful.
Still, a big question is how to crack down on harmful speech without unduly burdening the speech of legitimate users or the operators of smaller websites. Fundamentally, regulators have two options here. They can require online operators to take down content only after they have been notified of its existence, or they can require platforms to proactively monitor uploaded content.
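To make the contrast concrete, here is a minimal sketch of the two compliance models under invented names: a notice-and-takedown handler that acts only on items that have been reported, and an upload-time filter that screens everything proactively. The "classifier" is a keyword placeholder, not how any real platform detects harmful material.

```python
# Hypothetical contrast between the two compliance models described above.
# Function names and the classifier are invented for illustration only.

def handle_takedown_notice(store: dict[str, str], item_id: str) -> bool:
    """Reactive model: act only once a specific item has been reported."""
    if item_id in store:
        del store[item_id]
        return True
    return False

def looks_harmful(text: str) -> bool:
    """Stand-in for an automated classifier (a bare keyword match here)."""
    return "harmful" in text.lower()

def accept_upload(store: dict[str, str], item_id: str, text: str) -> bool:
    """Proactive model: screen every upload before it is published."""
    if looks_harmful(text):
        return False  # rejected at upload time, before any notice is filed
    store[item_id] = text
    return True
```

The difference in burden is visible in the shape of the code: the reactive handler runs only when someone complains, while the proactive filter must run on every single upload, which is why smaller sites worry about the cost of the second model.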
Current law
Under the EU’s E-Commerce Directive, current UK law shields online service providers from liability for content unless they have actual knowledge of its existence. But the UK government is now re-thinking that approach.
“The existing liability regime only forces companies to take action against illegal content once they have been notified of its existence,” the white paper says. “We concluded that standalone changes to the liability regime would be insufficient.”
Instead, the UK government says it’s opting for a “more thorough approach,” requiring technology companies to “ensure that they have effective and proportionate processes and governance in place to reduce the risk of illegal and harmful activity on their platforms.”
Of course, forcing technology companies to proactively monitor their platforms for objectionable content could create problems of its own, leading to unnecessary removal of legitimate content or eroding user privacy.
The government says there's no need to worry about this. "The regulator will not compel companies to undertake general monitoring of all communications on their online services, as this would be a disproportionate burden on companies and would raise concerns about user privacy," the document states. However, it says, there is "a strong case for mandating specific monitoring that targets where there is a threat to national security or the physical safety of children."
Vague by design
If that seems vague, that’s by design. Rather than spelling out the precise obligations of online service providers in its initial proposal, the government plans to create a new regulatory agency and have it write up specific guidelines for the various types of unsavory content that could show up on technology platforms.
Monday's publication of the online-harms white paper is just the first step toward developing these new regulations. The public now has 12 weeks to comment on the proposal. The government will then take those comments into account as it drafts a final legislative proposal.
If something like this proposal does become law, it could have significant impacts beyond the borders of the United Kingdom. The Internet is global, and we can expect the United Kingdom to demand that objectionable content be made inaccessible in the UK regardless of who originally uploaded it. In principle, major platforms could use geoblocking technology to prevent Britons from accessing objectionable content hosted in the United States or elsewhere. But technology companies may decide it’s easier to just take down objectionable content for everyone—especially if other jurisdictions pass similar laws.
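As a rough illustration of the geoblocking option, the sketch below withholds a flagged item only from viewers whose request resolves to a blocked country. The data structures and the IP-to-country lookup are hypothetical stand-ins, not any particular platform's implementation.

```python
# Hypothetical sketch of per-region content blocking ("geoblocking").
# The item structure, country resolution, and example IPs are invented.

from dataclasses import dataclass, field

@dataclass
class ContentItem:
    item_id: str
    body: str
    # ISO 3166-1 alpha-2 codes of countries where removal has been ordered.
    blocked_regions: set[str] = field(default_factory=set)

def resolve_country(ip_address: str) -> str:
    """Stand-in for a GeoIP lookup (real systems query a geolocation database)."""
    return "GB" if ip_address.startswith("81.") else "US"

def serve(item: ContentItem, client_ip: str) -> str | None:
    """Return the content, or None if it is blocked in the viewer's region."""
    country = resolve_country(client_ip)
    if country in item.blocked_regions:
        return None  # withheld only for viewers in the regulated jurisdiction
    return item.body

if __name__ == "__main__":
    post = ContentItem("abc123", "example post", blocked_regions={"GB"})
    print(serve(post, "81.2.69.160"))    # None: hidden from a UK address
    print(serve(post, "93.184.216.34"))  # "example post": visible elsewhere
```

Maintaining per-country rules like this for every jurisdiction's requirements is exactly the overhead that might push platforms toward the simpler path of removing flagged content globally.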
As a result, America’s strong free-speech tradition might become less and less relevant online, as online content policies are increasingly driven by countries with more activist approaches.