
EU regulators have proposed strict curbs on the use of facial recognition in public spaces, limiting the controversial technology to a small number of public-interest scenarios, according to new draft legislation seen by the Financial Times.
In a confidential 138-page document, officials said facial recognition systems infringed on individuals’ civil rights and therefore should only be used in scenarios in which they were deemed essential, for instance in the search for missing children and the policing of terrorist events.
The draft legislation added that “real-time” facial recognition—which uses live tracking rather than past footage or photographs—in public spaces by the authorities should only ever be used for limited periods of time, and it should be subject to prior consent by a judge or a national authority.
The document comes as privacy advocates, politicians, and European citizens have become increasingly vocal about regulating the use of live facial recognition. At present, there are no clear rules around how and where the technology can be used on the general public, so the proposed legislation would be the first to codify these limitations into law.
The introduction of tougher curbs on the use of facial recognition technology would likely reignite debate over whether the practice should be banned altogether, as experts warn that it is still fraught with risks.
In a landmark ruling last August, the UK Court of Appeal said that the use of facial recognition technology by South Wales Police was unlawful and found that it breached privacy rights, data protection laws, and equality laws.
The EU’s draft legislation also addressed a range of related issues such as algorithmic bias, arguing that technology used in contexts such as recruitment and finance should be developed so as not to replicate “historical patterns of discrimination” against minority groups.
EU regulators proposed hefty fines of up to 6 percent of global turnover for companies found to have abused artificial intelligence in this way or failed to detect biases when hiring workers or providing services.
They added that so-called social-scoring practices, which assess a person’s trustworthiness from behavioral data gathered about them, should also be banned. In China, for instance, a system is being developed that calculates a person’s credit score using information about their online habits.
“The social score obtained… may lead to the detrimental or unfavorable treatment of [people or groups]… which are unrelated to the context in which the data was originally generated,” the leak said.
The proposals, which will be presented on Wednesday in Brussels, will now be debated by the European Parliament and member states until at least 2023 before becoming law.
© 2021 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.