Accountability key to the adoption of surveillance technology

The media has been quick to pick up on news that San Francisco has banned the use of Facial Recognition Technology (FRT). S 19B2(d) of the Ordinance, introduced by San Francisco Supervisor Aaron Peskin and passed by the city’s Board of Supervisors, provides that, notwithstanding the ability to use other forms of surveillance, ‘it shall be unlawful for any Department to obtain, retain, access or use: (1) any Face Recognition Technology or (2) any information obtained from Face Recognition Technology.’ Inadvertent or unintentional receipt, retention, access to or use of any information obtained from Face Recognition Technology will not be unlawful, provided that the Department did not request or solicit it and that it logs its receipt or access in its Annual Surveillance Report.

Nor is San Francisco alone in the US in seeking to regulate FRT. In announcing the passing of the legislation, Aaron Peskin noted that it was in fact the seventh city to attempt some form of regulation. Oakland City Council’s Surveillance and Community Safety Ordinance 2018/04/26, for example, states that City staff must obtain City Council approval prior to accepting funds for, acquiring or agreeing to acquire, share or use any surveillance technology (including facial recognition) (9.64.030.1), and that this process must involve the submission of a Surveillance Use Policy and Surveillance Impact Report for the technology (9.64.020). The City Council then has to make a determination that the benefits to the community of the surveillance technology outweigh the costs and that there is no less restrictive means that could be used (9.64.030.2). Every year City staff must present a written Annual Surveillance Report for Privacy Advisory Commission review, on the basis of which the Commission is to decide whether the benefits of the technology still outweigh the costs and whether civil liberties are still safeguarded (9.64.050). Violation of the Ordinance constitutes an injury, and any person subjected to surveillance in breach of it is entitled to $1,000 or $100 per day of violation, whichever is greater. Berkeley, similarly, has enacted Ordinance No 7,592-NS, the Surveillance Technology Use and Community Safety Ordinance, requiring City Council approval, Surveillance Use Policies, a cost-benefit analysis by the City Council and ongoing annual reporting, though, unlike the Oakland legislation, it does not confer any rights on anyone other than the City Council or provide any causes of action for individuals.
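To put the Oakland damages provision in concrete terms, the per-day figure only overtakes the $1,000 floor once a violation has run for more than ten days. A minimal sketch of that calculation (an illustrative reading of the provision, not text drawn from the ordinance itself):

```python
def oakland_statutory_damages(days_of_violation: int) -> int:
    """Greater of $1,000 or $100 per day of violation (illustrative reading only)."""
    return max(1_000, 100 * days_of_violation)

# A three-day violation yields the $1,000 floor; a thirty-day violation yields $3,000.
print(oakland_statutory_damages(3))   # 1000
print(oakland_statutory_damages(30))  # 3000
```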

While all these forms of legislation concentrate on the use of facial recognition (and other surveillance technology) by public authorities, Sen. Reuven Carlyle (D-Seattle) has introduced a Bill in Washington which would restrict the use of facial recognition technology even by private entities where it is used to make decisions producing legal effects such as denial of services, support, or housing, as well as its use by public entities for purposes such as criminal justice (S 14). These restrictions include, in wording not unlike the GDPR on which it is modelled, ‘meaningful human review prior to making final decisions’ (14(1)), the prohibition of unlawful discrimination (14(3)), and notification that such technology is being used (14(4)). S 16(8) would also require the office of privacy and data protection to conduct an analysis of public sector use of facial recognition by 30 September 2023. The Bill is currently before the Senate Rules Committee for a third reading.

Interest in these US developments is particularly strong in the UK, given that there is apparently no similar reticence here. The Metropolitan Police is trialling the use of Live Facial Recognition (LFR) technology, testing it in a range of environments including public events and crowded spaces. It has so far been used nine times, including at Notting Hill Carnival in 2016 and 2017, on Remembrance Day in 2017, at the Port of Hull Docks (assisting Humberside Police) in 2018, at the Stratford transport hub for two days in June and July 2018, on Christmas shoppers in Soho, Piccadilly Circus and Leicester Square in 2018, and in Romford town centre on 31st January. LFR is also being trialled by South Wales Police and Leicestershire Police, while Liberty reports that it is also in use in Glasgow, and Heathrow Airport has its own trial. The technology used by the police in the UK, US and India is NeoFace, developed by the Japanese company NEC, while others in the US and China use software developed by Amazon or the Chinese company SenseTime.
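For readers unfamiliar with how such systems are driven in practice, a hosted face-matching service exposes the comparison step as a single API call that returns candidate matches above a similarity threshold. The sketch below uses Amazon Rekognition (mentioned above as software used in some US deployments) via the boto3 SDK purely by way of illustration; it is not the NeoFace system used by UK forces, whose interface is not public, and the image file names are hypothetical.

```python
import boto3

# Illustrative only: a generic face-comparison call against Amazon Rekognition,
# assuming AWS credentials are already configured. Not the NeoFace pipeline.
client = boto3.client("rekognition")

with open("probe.jpg", "rb") as probe, open("watchlist_face.jpg", "rb") as candidate:
    response = client.compare_faces(
        SourceImage={"Bytes": probe.read()},      # face captured by the camera
        TargetImage={"Bytes": candidate.read()},  # face held on a watchlist
        SimilarityThreshold=90,                   # only report matches scoring 90% or above
    )

for match in response["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")

if not response["FaceMatches"]:
    print("No match above the threshold")
```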

In the UK, use of the technology is covered by various pieces of legislation. Article 8 of the European Convention on Human Rights requires respect for private and family life, and thus under the Human Rights Act 1998 the police can only interfere with this right to the extent that is proportionate and necessary. The Met Police takes the view that LFR is compliant with this on the basis that it is ‘less intrusive than other methods’ and does not rely on information sharing with other agencies. An Advisory, Consultation and Oversight Group (ACOG) will be convened in order to ensure full consultation and an open and transparent decision-making process relating to the trials, and the Met Police considers that the trials are necessary in order to evaluate comprehensively the extent to which LFR presents a viable policing tactic. Part 3 of the Data Protection Act 2018 also governs the trials, but here too the Met states that its use of the technology is compliant on the basis that it is lawful and fair, and complies with the requirements of proportionality, data minimisation and security. The trials are also stated to be in line with the Surveillance Camera Code created under the Protection of Freedoms Act 2012, and subject to oversight by the Surveillance Camera Commissioner and the Biometrics Commissioner. Finally, the Met also states that it responds as required to requests under the Freedom of Information Act 2000 relating to the use of LFR.

Others, however, are not so confident. Although in October 2018 Steve Wood, Deputy Commissioner at the Information Commissioner’s Office (ICO), stated that ‘we are not yet at a point where we can see the law has been contravened’, in a blog post in May 2018 the Information Commissioner, Elizabeth Denham, had declared FRT a ‘priority area’ for her office and noted that she had written ‘to the Home Office and the National Police Chiefs Council (NPCC) setting out [her] concerns.’ ‘Should my concerns not be addressed’, she continued, ‘I will consider what legal action is needed to ensure the right protections are in place for the public’, and indeed in December 2018 the Telegraph announced that the ICO had launched an official investigation into police use of FRT. Nor is she alone. Big Brother Watch announced in May that it had raised £10,000 which it would use, in conjunction with Baroness Jenny Jones, to apply for judicial review of any decision by the Met Police to roll out the technology following the trials, while Liberty, in conjunction with Cardiff resident Ed Bridges, is seeking a similar review of South Wales Police’s use of FRT, to be heard from 21 to 23 May.

The reasons for these concerns are well known. Big Brother Watch’s own Freedom of Information campaign discovered that the Met’s system wrongly matched people 98% of the time, and that two deployments outside the Westfield shopping centre in Stratford had a 100% failure rate, misidentifying, among others, a 14-year-old black schoolboy. This is perhaps not unrelated to the finding by Inioluwa Raji and Joy Buolamwini, of the University of Toronto and MIT respectively, that Amazon’s Rekognition technology had reduced accuracy in identifying gender, particularly in relation to the darker-skinned female subgroup, and the BBC reports that minutes from a police working group reveal that the police were aware of similar concerns with their own technology. In the US, the most notorious example is the misidentification of 28 members of Congress.
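Part of what drives such figures is a base-rate effect: when a system scans a very large crowd for a small watchlist, even a low false-alarm rate produces far more false alerts than true ones, so most matches are wrong. The back-of-the-envelope sketch below uses purely hypothetical numbers (it is not how Big Brother Watch derived its figure, which came from the deployments themselves), but it shows how a roughly 98% false match rate can arise.

```python
# Hypothetical figures for illustration only - not actual deployment data.
crowd_size = 100_000        # faces scanned at an event
watchlist_present = 20      # watchlisted people actually in the crowd
true_positive_rate = 0.80   # assumed chance a watchlisted face is flagged
false_positive_rate = 0.01  # assumed chance an ordinary face is wrongly flagged

true_alerts = watchlist_present * true_positive_rate
false_alerts = (crowd_size - watchlist_present) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.0f}")    # 16
print(f"False alerts: {false_alerts:.0f}")   # ~1000
print(f"Share of alerts that are wrong: {1 - precision:.0%}")  # ~98%
```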

Many of these concerns, of course, derive from the data on which the systems are trained. Raji and Buolamwini cite various technology providers as pointing out that systems are only as good as their training data, and a study of South Wales Police’s system by Bethan Davies, Martin Innes and Andrew Dawson of Cardiff University (which did not examine the issue of racial bias) found that the quality of images used for matching was highly significant. It is also clear, however, that inaccuracy is not the only concern. In announcing the passing of the legislation, San Francisco Supervisor Aaron Peskin noted that ‘even if the technology is ultimately perfected, facial recognition technology is uniquely dangerous and oppressive’.

But is it unrealistic to suggest that, if the technology could be perfected, the voices in favour of its use might grow stronger? As the Met Police points out in justifying its own trials of the technology, FRT has the potential to save time and money (not unimportant in a time of police cuts), and to be more effective than existing techniques. Researchers at George Washington University found that, out of 76 jihadist attacks in Western Europe and North America in recent years, more than half involved perpetrators who had been on a security service watch list, while the Intelligence and Security Committee condemned MI5’s handling of the case of Salman Abedi, who perpetrated the Manchester Arena attack. If the technology could be made to work in an unbiased and accurate fashion to identify such known individuals as they approached their target venue, it would be harder to argue that an outright ban on its use was proportionate and necessary. Indeed, even at present, while the technology is in its inaccurate infancy, of the pieces of legislation discussed above only San Francisco’s goes this far.

And while it is the San Francisco ban on facial recognition that has grabbed the headlines, even Aaron Peskin regarded the Ordinance’s ‘stop secret surveillance’ title as a bit of a misnomer. ‘It is’, he said, ‘really an ordinance that is about having accountability around surveillance technology… [with the exception of FRT] It is actually not designed to stop the use of any technologies that we currently employ or may use in the future. The fundamental thrust of the law…is to ensure the safe and responsible use of surveillance technology.’ Sure enough, the rest of the legislation, like that of Oakland, seeks to govern the way in which decisions to use surveillance technology of all kinds (including other kinds of automated recognition, such as licence plate recognition) are made, and S 1(f) of the Ordinance provides that ‘Legally enforceable safeguards, including robust transparency, oversight, and accountability measures, must be in place to protect civil rights and civil liberties before any surveillance technology is deployed’. This is echoed in the preamble to the Oakland Ordinance and in 2.99.010(G) of the Berkeley legislation, while the proposed Washington legislation would require FRT providers to make their software available for independent testing (14(5)).

This accountability is, of course, in part human (the San Jose Police Department, for example, apologised for secretly buying a drone for its bomb squad). But it will also need to be technological, so that we understand how the systems are working and what has gone wrong if they fail in either direction; a sketch of what such a technological audit trail might record follows below. Thus, although the current focus may be on the San Francisco FRT ban, for the rest of the world, and for all other forms of surveillance technology, it is this focus on accountability in functioning and use that will enable us to strike the right balance between safety and liberty.
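As to what a technological audit trail might record, one possibility is that every alert is logged with enough detail for an oversight body such as a commissioner or advisory group to reconstruct how the system behaved and where it failed. The schema below is a hypothetical sketch, not drawn from any deployed system or from the legislation discussed above.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MatchAuditRecord:
    """One auditable FRT alert: a hypothetical schema for later oversight review."""
    deployment_id: str          # which trial or location produced the alert
    timestamp: str              # when the alert fired
    similarity_score: float     # the score the system reported
    human_review_decision: str  # e.g. "confirmed" or "rejected" by the reviewing officer
    outcome: str                # what happened next, for error analysis in either direction

record = MatchAuditRecord(
    deployment_id="example-trial-07",
    timestamp=datetime.now(timezone.utc).isoformat(),
    similarity_score=0.91,
    human_review_decision="rejected",  # officer judged it a false match
    outcome="no stop made",
)

# Append-only log that an oversight body could audit after the fact.
with open("frt_audit_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```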
