Facial Recognition

By Jonathan Jeckell

Face recognition on security feeds has been portrayed both as an Orwellian tool of the Chinese surveillance state and as a faulty technology that is worse than useless in police hands.  The Chinese government has boasted that its CCTV network can instantly identify people and aid its security services.  In one case, the Chinese government claimed to have picked a man wanted for “economic crimes” out of a crowd of 60,000 more than 100 kilometers from his home.  Human rights groups have voiced concern that China is using the technology to build a dystopian police state, particularly given claims that it is used to intrusively monitor ethnic minorities, such as the Uighurs in western China.

This technology is also being deployed outside of China.  US Customs has introduced face recognition as a way to speed up the lines at passport checks at a handful of US airports.  New York police want to link their face recognition system to all drivers’ license photos.  The mayor of London wants to use face and palm recognition to “speed up” access to public transportation.


This technology is being pursued even more aggressively by private organizations.  Social media companies obviously use face recognition for marketing and advertising purposes.  Some ad companies even want to reward people for allowing access to the cameras on their devices to gauge their reactions to ads.  Ticketmaster is plugging face recognition into its Presence digital pass system, which uses digital tokens rather than paper tickets to grant access to live events.  If it works as claimed, authorized people will be scanned on the move (without stopping to look at a camera), speeding up entry for the large crowds attending live events.  Two companies founded by former Israeli intelligence agents, Terrogence and Verint, promise biometric security, including automatically identifying dangerous or blacklisted individuals on a customized watchlist as they enter corporate grounds.  They harvested more than 35,000 images and videos from numerous public and closed sites to create their list.  Even private individuals have access to this technology through smart home security cameras and even homebuilt devices with customized face recognition features.

Numerous groups are also trying to stop the deployment of this technology for security purposes, citing unreliable or biased results.  Police in South Wales in the UK recently admitted that they tested face recognition to find criminals in CCTV footage, notably among the 170,000 people who attended the Real Madrid and Juventus soccer match.  It identified 2,470 potential criminals, but 2,297 (92%) were false positives.  None of the people misidentified via a false positive were arrested.  More recently, the ACLU claimed that the face recognition system Amazon sells to law enforcement falsely matched 28 members of Congress with mugshots, in a test that cost the ACLU just $12.33.  Moreover, they claimed the false matches were disproportionately non-white.
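The Welsh numbers illustrate a base-rate effect worth making explicit: when genuinely wanted people are rare in a large crowd, even a fairly accurate matcher produces mostly false alarms.  A minimal sketch of the arithmetic — the first half uses the reported South Wales figures, while the crowd and error-rate numbers in the second half are hypothetical, chosen only to illustrate the effect:

```python
# False-discovery arithmetic from the reported South Wales figures:
# 2,470 alerts, of which 2,297 were false positives.
alerts = 2470
false_positives = 2297
true_positives = alerts - false_positives           # 173 correct identifications
false_discovery_rate = false_positives / alerts     # ~93% (widely reported as 92%)
print(f"true positives: {true_positives}")
print(f"false discovery rate: {false_discovery_rate:.1%}")

# Why this happens: a base-rate illustration with HYPOTHETICAL numbers.
# Suppose 500 genuinely wanted people attend among 170,000, and the system
# flags 90% of them while mis-flagging only 1.5% of everyone else.
crowd, wanted = 170_000, 500
hit_rate, false_alarm_rate = 0.90, 0.015
hits = wanted * hit_rate                            # ~450 correct alerts
false_alarms = (crowd - wanted) * false_alarm_rate  # ~2,540 spurious alerts
share_false = false_alarms / (hits + false_alarms)
print(f"share of alerts that are false: {share_false:.0%}")
```

Even with a low per-person error rate, the sheer number of innocent faces scanned means false alarms dominate — which is why raw alert counts say little about a system's underlying accuracy.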


Face recognition technology can require an extensive collection of images to train the software, but only a few images of each target individual to identify particular people.  Google, Facebook, Apple, and others have access to massive collections of photos, which users helpfully tag for them; the larger and more representative the dataset, the more accurate the results.  China has many compulsory ways to gather this data, but has also used flattery and other incentives to obtain images voluntarily, much as social media companies have elsewhere.  As mentioned earlier, many have criticized face recognition systems developed from social media platforms and used by law enforcement as very error prone for non-white people, because those platforms have been disproportionately used, developed, and tested by mostly white people.  Recognizing this, IBM has released 36,000 annotated face images evenly distributed across genders, ethnicities, and ages (including different hair styles, facial hair, and so on) to help train more universally reliable and representative systems.
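Under the hood, most modern systems reduce each face image to a numeric embedding vector (produced by a deep network trained on large photo collections like those described above) and then compare vectors: two images are declared the same person when their embeddings are close enough.  A minimal sketch of the comparison step, with tiny hand-made vectors standing in for real network outputs — the names, vectors, and threshold here are all hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings; real systems use 128+ dimensions
# produced by a trained neural network, not hand-written numbers.
enrolled = {
    "alice": [0.9, 0.1, 0.3, 0.2],
    "bob":   [0.1, 0.8, 0.2, 0.5],
}

def identify(probe_embedding, gallery, threshold=0.95):
    """Return the best-matching enrolled identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

probe = [0.88, 0.12, 0.28, 0.22]   # a new image that should resemble "alice"
print(identify(probe, enrolled))   # prints "alice"
```

The threshold is the policy lever behind the accuracy debates above: lowering it catches more true matches but produces more false positives, while raising it does the reverse — so the same software can look reliable or reckless depending on how it is tuned.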

As flawed as face recognition systems can be, they are extremely useful and usually reliable for a huge number of tasks, providing quicker and more convenient identification for access to services and finding criminals or other wanted individuals whom police could not.  Though the Welsh experiment had a 92% false positive rate, the system identified 173 people the police had missed.  Human beings are also terrible at identifying people they don’t know well and have proven to be unreliable witnesses.  Face recognition, coupled with other ways of identifying people and used within the framework of established civil rights, can make interactions with police and security services more fair and just, not less, by apprehending the right person up front instead of rounding up everyone who bears the slightest resemblance to a suspect.  Imagery analysts and others who scan faces can only sustain their attention for short periods, while automation works tirelessly.  For example, a project in India allegedly helped Delhi police find 3,000 missing children in four days, a task that would have taken vastly longer (if it were possible at all) using humans alone.

The combination of artificial intelligence and cameras promises even more in the future.  A Japanese system can allegedly spot suspicious behavior indicative of shoplifting and alert the store owner.  Other researchers are training cameras to spot signs of violence.  Of course, these advances bring with them more concerns about misuse by authorities, such as breaking up protests or retribution for participation in certain activities.  But at its root, this is an issue we have faced for generations of citizen and employee relations, only with eyewitnesses; today’s systems simply synthesize bits of information far more effectively and store them permanently.