Risk and reward: increases in the use of facial recognition software

Asya Sonnichsen

In recent years facial recognition technology has moved from being a staple of science fiction to an everyday reality. Technology firms can now collect and process billions of images from the Web and deliver analysis virtually instantly to law enforcement agencies, businesses and individuals. This has created a serious issue for anyone anxious about their privacy while simultaneously presenting a remarkable opportunity for investigators.

Facial recognition technology is already extensively used by governments and law enforcement agencies to identify individuals. In China, the Sharp Eyes programme uses over 200 million surveillance cameras to track citizens, rank their trustworthiness, and penalise them. Those penalised may be prevented from travelling and their children could be denied places at leading schools.

Such capabilities are no longer the exclusive domain of law enforcement. Tools such as FindClone and Yandex now let any resourceful investigator match faces online with remarkable accuracy, locating social media profiles on networks such as VKontakte and Facebook.
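Under the hood, such tools typically convert each face into a numerical "embedding" vector and treat two faces as the same person when their vectors lie close together. The sketch below illustrates the idea with toy vectors and a plain Euclidean distance check; real systems compute 128-dimensional (or larger) embeddings with deep neural networks, and the names, vector sizes and threshold here are illustrative assumptions, not any vendor's actual implementation.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face-embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(probe, candidate, threshold=0.6):
    """Treat two faces as the same person if their embeddings are close.
    The 0.6 threshold mirrors a common default for 128-d face embeddings;
    in practice it is tuned to trade off false matches against misses."""
    return euclidean_distance(probe, candidate) < threshold

# Toy 4-d embeddings standing in for real 128-d network outputs
target    = [0.10, 0.40, 0.85, 0.30]
profile_a = [0.12, 0.38, 0.86, 0.29]   # near-identical -> likely same person
profile_b = [0.90, 0.10, 0.05, 0.70]   # far away -> different person

print(is_match(target, profile_a))  # True
print(is_match(target, profile_b))  # False
```

A search tool simply repeats this comparison against millions of indexed profile photos and returns the closest matches, which is why a single public photo is enough to link a face to an identity.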

The scope of these tools is rapidly increasing, as is their application. The controversial facial recognition tool Clearview AI has scraped over three billion photos from the Web and is used by more than 600 law enforcement agencies in North America alone. This technology was recently pitched to US state agencies to prevent the spread of COVID-19 by tracking quarantine evaders and supporting contact tracing.

The potential applications for such tools in the hands of investigators are game-changing: tracing concealed assets by unearthing photos of investigation targets repeatedly staying at a luxurious villa, identifying individuals observed during surveillance or proving an association between two individuals based on photos of them together on social media. Such tools could improve efficiency and speed in investigations while reducing cost. While some individuals are extremely security conscious, it only takes one friend or family member to post a photo of them to a public Instagram account and they are exposed.

Nevertheless, this technology carries risks. In the case of Clearview AI, its sale and usage are unregulated, and a recent data breach revealed that, despite the company's claim to provide access only to law enforcement, it had dealt with private companies and individuals globally. This lack of transparency means that a vast amount of biometric data may end up being used improperly.

Clearview's claim of a First Amendment right to publicly available information has also been challenged by US privacy lawyers, while Twitter, Facebook and Google have all sent the company cease-and-desist letters demanding that it remove their photos.

A further risk is false identification. Owing to bias in the underlying algorithms, people of colour and women face higher rates of misidentification, which may lead to innocent parties being wrongly implicated in legal disputes or other matters to which they have no connection.

The possibility of client lists being leaked also presents a serious PR hazard, to say nothing of the blowback for investigators if specific search histories were revealed.

The legal, moral and ethical issues associated with facial recognition technology have prompted several US cities to ban it and take action against any violators. In May 2019 San Francisco voted to prohibit police and other government agencies from using facial recognition. Other cities soon followed. In May 2020, after being sued by Illinois, Clearview AI announced that not only would it terminate all of its contracts in the state, but that it would no longer offer access to its app to private companies or non-law enforcement entities.

While law enforcement use of the technology had long been considered mainstream, the Black Lives Matter movement and other civil liberties organisations have recently spoken out against the lack of regulation and the technology's inherent racial bias. In response, several users and developers have sought to distance themselves from it. In Australia, Victoria Police has stated that it will no longer use Clearview AI. Amazon has barred police forces from using its own facial recognition technology for a year, Microsoft has suspended sales of its facial recognition product to law enforcement until nationwide US regulation is in place, and IBM has announced that it will cease developing such technology altogether.

Such complexities notwithstanding, there remains an appetite by many outside of law enforcement and governments to utilise facial recognition in marketing and security services. As the technology advances, having one’s facial and biometric data collected and indexed by authorities and corporations is becoming impossible to avoid.

The challenge for investigators is to ensure that we consider the benefits of the technology in improving investigations while assessing its potential risks and ethical implications.