Google is developing a free moderation tool that smaller websites can use to identify and remove terrorist material, as new legislation in the UK and the EU compels Internet companies to do more to tackle illegal content.
The software is being developed by Jigsaw, the search giant’s research and development unit, in partnership with Tech Against Terrorism, a UN-backed initiative that helps tech companies police online terrorism.
“There are a lot of websites that just don’t have any people to do the enforcement. It is a really labor-intensive thing to even build the algorithms [and] then you need all those human reviewers,” said Yasmin Green, chief executive of Jigsaw.
“[Smaller websites] do not want Isis content there, but there is a ton of it all over [them],” she added.
The move comes as Internet companies will be forced to remove extremist content from their platforms or face fines and other penalties under laws such as the Digital Services Act in the EU, which came into force in November, and the UK’s Online Safety Bill, which is expected to become law this year.
The legislation has been pushed by politicians and regulators across Europe who argue that Big Tech groups have not gone far enough to police content online.
But the new regulatory regime has led to concerns that smaller start-ups are not equipped to comply and that a lack of resources will limit their ability to compete with larger technology companies.
“I have noticed a big shift in the [leading] platforms becoming much more effective at moderating, and that pushes terrorist content and COVID hoax claims to [other sites],” Green added.
A report by the Global Internet Forum to Counter Terrorism in 2021 estimated that for every 10,000 posts on Facebook, six would contain terrorist or extremist content. On smaller platforms, this figure could be as high as 5,000, or 50 percent of content.
GIFCT, a non-governmental organization founded by Facebook, Microsoft, Twitter, and YouTube in 2017 to foster collaboration between tech platforms, is supporting Jigsaw’s project. The organization maintains a database of terrorist content shared across its member tech companies, which moderation systems can use to detect existing material.
On December 13, Meta, the owner of Facebook and Instagram, launched open source software that other platforms can deploy to match terrorist content against existing images and videos in the database and flag it for urgent human review.
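Systems of this kind typically work by sharing digital fingerprints (perceptual hashes) of known material rather than the material itself, with each platform checking new uploads against the shared list. The snippet below is not Meta’s or GIFCT’s software; it is a minimal, hypothetical Python sketch of that general idea, comparing a hash of an uploaded item against a shared hash list and flagging near matches for human review. The hash values, threshold, and function names are illustrative assumptions.

```python
# Illustrative sketch only: not the actual Meta or GIFCT tooling.
# Compares a perceptual hash of an uploaded item against a shared list
# of known-bad hashes and flags near matches for human review.

HAMMING_THRESHOLD = 8  # assumed tolerance for near-duplicate matches


def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two hex-encoded hash strings."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")


def flag_for_review(upload_hash: str, shared_hashes: list[str]) -> bool:
    """Return True if the upload is within the threshold of any known hash."""
    return any(
        hamming_distance(upload_hash, known) <= HAMMING_THRESHOLD
        for known in shared_hashes
    )


# Example with hypothetical 64-bit perceptual hashes (hex strings).
known_bad = ["f0e1d2c3b4a59687", "0123456789abcdef"]
upload = "f0e1d2c3b4a59685"  # differs from the first entry by one bit

if flag_for_review(upload, known_bad):
    print("Match against shared hash list: queue for urgent human review")
else:
    print("No match: no action")
```

In practice the matching step only identifies previously catalogued material; deciding what to do with flagged items is the human-review problem Jigsaw's tool is aimed at, as the article goes on to describe.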
Jigsaw’s tool aims to tackle the next step of the process and help human moderators make decisions on content flagged as dangerous and illegal. It will begin testing with two unnamed sites at the beginning of this year.
“In our experience, we find that terrorists seek to exploit smaller platforms where content moderation is challenging due to limited resources,” said Adam Hadley, director of Tech Against Terrorism.
Jigsaw has about 70 staff, primarily based in Google’s offices in New York. Green, who became chief executive in July, said the loss-making division was not expected to become profitable.
“There’s an understanding that there’s a long-term business return… Google needs a healthier Internet,” Green added. “We are helping Google and helping the Internet in a way that delivers value even though it isn’t monetary.”
© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.