How governments manage security risk with facial recognition technology

To combat terrorism, criminality and dissent, governments are turning to advances in facial recognition to identify, track and analyse potential threats. This technology has the potential to reduce security costs but may entrench discrimination.

The panopticon

Last year, 25 people were arrested after being identified as criminals by real-time facial recognition systems at the Qingdao Beer Festival in China. Cameras placed at the entrances analyzed faces in the crowd and alerted police when they spotted faces matching those in a criminal database. Others with records of past criminal behaviour were turned away at the gates based on their perceived threat. This deployment illustrates the rising viability of real-time facial recognition and the ways it can be used to control people and spaces.

Real-time facial recognition technology for managing security risks has been in development for several decades, but only recently has it been employed effectively. In August, police in Germany installed facial recognition cameras at a train station in Berlin in order to track and arrest criminals and terrorism suspects. Real-time facial recognition has also recently been used for security purposes in the US, UK and Russia, among other countries. Research partially funded by DARPA, a research agency within the US Department of Defense, even aims to make real-time facial recognition compact enough to fit into police body cameras.

Combating chaos?

In the current political climate, particularly in Europe, it is understandable that governments should want greater latitude in identifying, tracking, and analysing their populaces. But tracking an individual is expensive: it takes 18 to 20 officers to follow a single suspect 24 hours a day. According to the director general of MI5 in the UK, the scale and pace of terrorist activity in the UK has risen significantly and is stretching the limits of national security services. Security firms promise that facial recognition will reduce those costs and increase the number of people who can be tracked.

In countries with less liberal political systems, the technology is also likely to prove an important tool for suppressing dissent. Research has shown that Chinese censorship targets not critique but collective action; facial recognition would help stifle dissidence on the ground more effectively by stripping protestors of any anonymity. China is already using it extensively in Xinjiang, home to the Uighurs, a Muslim ethnic minority accused of fomenting separatism and terrorism.

The appeal of facial recognition technology for both liberal and illiberal governments is the opportunity to preempt some combination of crime, terrorism, or dissent. By combining it with other databases, governments would be able to build a more comprehensive picture of individuals spanning online and offline habits. They would also be able to identify and track suspicious individuals in public spaces based on activities conducted online.

Illusions of impartiality

The dynamics of this new system are increasingly asymmetrical. Identities will be understood by the state and organized centrally, while the populace is unlikely to fully understand what kinds of information are being gathered or how individuals are classified. Even someone stopped in the street by police may not know whether the stop was random or based on a combination of their online and offline habits. This asymmetry of information makes it especially important that these systems have some degree of transparency and that legal challenges to the system are possible in cases of discrimination.

Ultimately, artificial intelligence systems require human judgement to set the criteria for identifying suspicious activity and individuals. This creates an opening for political agendas to slip in and intensify the targeting of already marginalized groups. Unless governments are intentional about reducing bias, artificial intelligence combined with facial recognition cameras could become a less visible version of controversial stop-and-frisk policies.

About Author

Peter Hays

Peter is a London-based analyst. He specializes in trade and regulation in the Asia-Pacific region. He holds an MSc in Economy, Risk and Society from the London School of Economics and a BA in International Studies from American University.