Facial Recognition Systems Are Not Ready for Law Enforcement Use

william smith
May 25, 2018

Only two weeks ago, in an article (https://medium.com/p/c6eaa7e000fb) for the web site “The Humanists of Our Generation,” I wrote:

“Surveillance States always arise in reaction to a threat of terrorism. Perhaps the most iconic story of mass surveillance is George Orwell’s 1949 novel Nineteen Eighty-Four, which depicts a dystopian surveillance state (Oceania) created “to protect” its citizens from all types of evil.”

At the time I wrote the Medium.com piece, I never thought I’d be coming back to the topic of surveillance so soon. But just today it’s being reported that “More than two dozen civil rights organizations are calling on Amazon CEO Jeff Bezos to stop selling its facial recognition technology to the government, according to a letter made public Tuesday by the American Civil Liberties Union (ACLU).” (See the ACLU letter: https://www.aclunc.org/docs/20180522_AR_Coalition_Letter.pdf)

Amazon markets its technology under the name “Rekognition” and innocuously describes it as a way to provide an image or video to the Rekognition API so that the service can identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. But “Rekognition” is not innocuous by any means: it is being sold to businesses and government agencies, which generally use it in conjunction with camera-based surveillance technology.
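
For a sense of what that service looks like in practice, here is a minimal sketch of a call to the Rekognition detection API through the AWS SDK for Python (boto3); the bucket and file names are hypothetical placeholders:

```python
import boto3

# Create a Rekognition client (assumes AWS credentials are configured).
client = boto3.client("rekognition")

# Ask Rekognition to identify objects and scenes in an image the
# customer supplies. Bucket and file names here are hypothetical.
response = client.detect_labels(
    Image={"S3Object": {"Bucket": "customer-surveillance-footage",
                        "Name": "frame-001.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(label["Name"], label["Confidence"])
```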

Facial recognition systems identify people by processing their digital images, provided their facial identity has also been stored in a database.

The system works as follows (a minimal code sketch of the matching step follows the list):

  • digital images or still frames are taken from a video source,
  • the facial recognition algorithm extracts data on facial characteristics, such as the position and shape of the eyes, nose, cheekbones, and jaw,
  • it measures the distances between these characteristics, and
  • it maps the data extracted from the source image to facial data stored in a database.
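
Here is that matching step reduced to a minimal Python sketch. It assumes feature extraction has already happened; the distance threshold is an arbitrary illustration, and real systems use far more sophisticated features and metrics:

```python
import numpy as np

def find_match(probe_features, database, threshold=0.6):
    """Compare a probe face's feature vector against every enrolled face.

    `database` maps an identity to its stored feature vector. Returns the
    closest identity if its distance falls under the (arbitrary) threshold,
    otherwise None.
    """
    best_id, best_dist = None, float("inf")
    for identity, stored in database.items():
        # Euclidean distance between feature vectors; smaller = more similar.
        dist = np.linalg.norm(probe_features - stored)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist < threshold else None

# Toy usage with fabricated 4-number "feature vectors".
db = {"alice": np.array([0.1, 0.9, 0.3, 0.5]),
      "bob": np.array([0.8, 0.2, 0.7, 0.1])}
print(find_match(np.array([0.12, 0.88, 0.31, 0.52]), db))  # -> "alice"
```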

Amazon claims its “system” can be useful for identifying people in crowds at airport terminals, railway stations, and the like. Facial recognition systems can capture multiple images in a second, compare them with pictures stored in a database, and produce the results of the comparison.

While there are a half dozen or so aspects to a “Facial Recognition system,” Amazon’s Rekognition API is only the algorithm part. Amazon is offering a service (an API) that allows a customer to provide pictures or video, taken by the customer’s cameras, to Amazon, so that Amazon can run the pictures through its algorithm and compare the results against pictures stored in a database that is also provided by the customer (a minimal sketch of this division of labor follows the list). So Amazon is

  • not taking the original source pictures
  • or maintaining the database of pictures against which the results of its analysis are compared.
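
In AWS SDK for Python (boto3) terms, the division of labor looks roughly like this: both the probe image and the face collection it is searched against come from the customer, and Amazon only runs the comparison. The bucket and collection names here are hypothetical:

```python
import boto3

client = boto3.client("rekognition")

# The face collection is built from pictures the customer supplies and
# indexes; Amazon never controls their provenance. All names hypothetical.
response = client.search_faces_by_image(
    CollectionId="customer-watchlist",
    Image={"S3Object": {"Bucket": "customer-camera-feed",
                        "Name": "probe.jpg"}},
    FaceMatchThreshold=90,
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```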

Both of these characteristics create significant quality control issues related to any analysis produced by Amazon’s Rekognition software. That lack of quality control should be of concern to any law enforcement agency using Amazon’s service.

The ACLU writes:

“This product (i.e. Rekognition) poses a grave threat to communities, including people of color and immigrants, and to the trust and respect Amazon has worked to build,” the letter reads. “People should be free to walk down the street without being watched by the government. Facial recognition in American communities threatens this freedom.”

Everything the ACLU writes is certainly true from a civil liberties perspective, but perhaps it is the accuracy and quality of Amazon’s system that create an even greater concern.

A chain of custody (CoC) refers to the chronological documentation, or paper trail, that records the sequence of custody, control, transfer, analysis, and disposition of physical or electronic evidence (a sketch of one way to make such a trail tamper-evident follows the list below). There are at least two points along the Amazon process where the chain of custody is broken:

  • Original source pictures are not within Amazon’s control
  • Pictures in the database, against which Amazon’s algorithm makes comparisons, are not within Amazon’s control
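
For illustration only, here is one common way a digital chain of custody can be made tamper-evident: hash-chaining each custody event so that altering any earlier record invalidates everything after it. This is my sketch of the general technique, not anything Amazon or NEC actually ships:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_event(log, actor, action, evidence_id):
    """Append a custody event whose hash covers the previous entry, so
    altering any earlier record invalidates every record after it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    event = {
        "evidence_id": evidence_id,
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    log.append(event)

log = []
record_event(log, "camera-07", "captured", "frame-001.jpg")
record_event(log, "analyst-a", "submitted for analysis", "frame-001.jpg")
```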

The Amazon service is not the only face recognition system that presents problems of custody. NEC markets a similar surveillance/facial recognition product under the name “NeoFace Watch.”

According to NEC, its “NeoFace Watch matches faces from video surveillance against the appropriate watch list databases and raises real-time alerts.” Of course, NEC’s claim raises two questions (a hypothetical sketch of the alerting logic follows the list):

  1. What is the meaning of “appropriate watch list databases”? (What is the source of the pictures in the database?)
  2. How are “real-time alerts” raised? (Who is alerted, and to what degree? What are they authorized to do on the basis of an alert: detain, remove, arrest?)
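
NEC does not publish those details. Generically, though, the alerting step in such systems reduces to a similarity threshold; everything below is a hypothetical illustration of that idea, not NEC’s actual implementation:

```python
ALERT_THRESHOLD = 0.90  # arbitrary; a real deployment would have to tune this

def process_match(similarity, watchlist_entry, notify):
    """Raise an alert when a face matches a watch-list entry closely enough.

    Who gets notified, and what they are authorized to do about it (detain,
    remove, arrest), is a policy question the software cannot answer."""
    if similarity >= ALERT_THRESHOLD:
        notify(f"ALERT: possible match for {watchlist_entry} "
               f"(similarity {similarity:.2f})")

process_match(0.93, "watchlist-entry-42", notify=print)
```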

Both the Amazon and NEC products would likely fail a standard chain of custody test. They therefore have to rely on demonstrating the certainty of their results to claim reliability and worth.

A. Dutta, R. Veldhuis and L. Spreeuwers, of the Department of EEMCS (Electrical Engineering, Mathematics and Computer Science) at the University of Twente, Netherlands, have developed what they call an Image Quality Assessor (IQA), which can predict the performance of a facial recognition system “even before the actual recognition has taken place.”

The researchers explain that “Given that practical face recognition systems make occasional mistakes … there is a need to quantify the uncertainty of decision about identity.” They “are interested in not only the verification decision (match or non-match) but also in its uncertainty.”

A diagnostic test, whether facial recognition or medical, yields four outcomes of interest:

  • True Positive: the system declares a match, and the identity truly matches
  • False Positive: the system declares a match, but the identity does not match
  • False Negative: the system declares no match, though the identity truly matches
  • True Negative: the system declares no match, and the identity does not match

Vendors of commercial off-the-shelf (COTS) face recognition systems (e.g. Amazon’s Rekognition and NEC’s NeoFace) apparently provide a Receiver Operating Characteristic (ROC) curve, which characterizes the uncertainty of the decision about identity at several operating points in terms of the trade-off between the false match (false positive) rate and the false non-match (false negative) rate.

A receiver operating characteristic curve, i.e. ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system, like facial recognition, as its discrimination threshold is varied. In the case of facial recognition systems, an ROC curve can illustrate accuracy in terms of the system’s likelihood of producing false positive (declaring a match when none exists) or false negative (not declaring a match when one exists) results.
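
As a sketch of how such a curve is produced, here is an ROC computed with scikit-learn over made-up match scores; the labels and scores are synthetic, purely for illustration:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic labels: 1 = genuine pair (same person), 0 = impostor pair.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
# Hypothetical similarity scores from a face matcher.
y_score = np.array([0.95, 0.85, 0.70, 0.55, 0.60, 0.40, 0.30, 0.10])

# Each point on the ROC is a (false match rate, true match rate) pair
# at one decision threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: false match rate {f:.2f}, "
          f"true match rate {t:.2f}")
```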

Hypothetical receiver operating characteristic (ROC) data can be pictured as a set of points, say “a,” “b,” and “c,” each representing a pair of hit and false alarm rates associated with a different cutoff.

The term sensitivity refers to the number of people with the disease who test positive (true positives) divided by the total number of people tested who have the disease. The term specificity refers to the number of people without the disease who test negative (true negatives) divided by the total number of people tested who do not have it. In face recognition terms, sensitivity is the true match rate and specificity is the true non-match rate.
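
A quick worked example with made-up counts shows how the two quantities fall out of the four outcomes above:

```python
# Hypothetical counts from 1,000 comparisons.
tp, fn = 90, 10   # 100 genuine matches: 90 caught, 10 missed
tn, fp = 850, 50  # 900 non-matches: 850 correctly rejected, 50 false alarms

sensitivity = tp / (tp + fn)  # 0.90: share of true matches found
specificity = tn / (tn + fp)  # ~0.94: share of non-matches rejected

print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```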

Usually, a vendor-supplied ROC represents the “recognition performance” the face recognition system is expected to deliver under ideal conditions. In practice, ideal conditions are rarely met, and the actual recognition performance therefore varies from certain to uncertain. For example, a verification decision made using a frontal image with even lighting entails more certainty than one carried out on mugshots taken at an angle and captured under poor lighting conditions.

A test with perfect discrimination (no overlap in the two distributions) has an ROC curve that passes through the upper left corner (100% sensitivity, 100% specificity). Therefore, the closer the ROC curve is to the upper left corner, the higher the overall accuracy of the test (Zweig & Campbell, 1993). For a face recognition system, this means a higher accuracy in finding a true match between the input face and the faces stored in the database.
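
That “closeness to the upper left corner” is commonly summarized as the area under the curve (AUC): 1.0 for a perfect classifier, 0.5 for coin-flipping. Continuing the synthetic example from the earlier sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Same synthetic labels and scores as the earlier ROC sketch.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_score = np.array([0.95, 0.85, 0.70, 0.55, 0.60, 0.40, 0.30, 0.10])

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")  # 1.0 = perfect separation, 0.5 = chance
```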

The problem of producing the same levels of certainty and uncertainty about a facial match using different frontal comparisons is cleverly illustrated by Abhishek Dutta in his PhD thesis with three paired-image examples. The same difficulty applies to both the Amazon and NEC algorithms, because neither controls Facial Image 1, taken by a camera, or Facial Image 2, stored in a database, which may well be a different frontal view than Facial Image 1.

In their performance prediction model, the University of Twente researchers considered only two image quality features: pose and illumination. However, there is also variability in the unaccounted-for quality space formed by other image quality features, such as resolution, capture device characteristics, and facial uniqueness.
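
The underlying idea can be sketched as a toy model: train a simple classifier to predict, from image quality features alone, whether the recognizer’s decision will be correct. This is my reconstruction of the concept with fabricated numbers, not the researchers’ actual IQA:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is (pose angle in degrees, illumination
# evenness 0-1) for a comparison; label 1 = the recognizer got it right.
quality = np.array([[0, 0.9], [5, 0.8], [10, 0.7], [15, 0.6],
                    [30, 0.4], [40, 0.35], [45, 0.3], [60, 0.2]])
correct = np.array([1, 1, 1, 1, 0, 0, 0, 0])

predictor = LogisticRegression().fit(quality, correct)

# Estimated probability that recognition will succeed for a new image
# pair, computed BEFORE any actual recognition takes place.
print(predictor.predict_proba(np.array([[20, 0.5]]))[0, 1])
```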

As MIT’s Joy Buolamwini explains, a facial recognition system can also suffer from what she calls “algorithmic bias” when the database against which input pictures are compared is insufficiently robust.
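
One simple way to surface that kind of bias is to break the error rates out by demographic group; the counts below are invented solely to illustrate the calculation:

```python
# Invented evaluation results broken out by demographic group.
results = {
    "group_a": {"false_matches": 2, "comparisons": 1000},
    "group_b": {"false_matches": 31, "comparisons": 1000},
}

for group, r in results.items():
    rate = r["false_matches"] / r["comparisons"]
    print(f"{group}: false match rate = {rate:.3f}")

# A large gap between groups is the signature of a comparison database
# (or training set) that under-represents one of them.
```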

The flatter a system’s ROC curve, together with a higher degree of algorithmic bias, should give everyone, including the ACLU and law enforcement agencies, concern about the law enforcement application of any face recognition system, including Amazon’s Rekognition and NEC’s NeoFace Watch.

__________________________________________________________________

Notes:

  1. http://money.cnn.com/2018/05/22/technology/amazon-surveillance-technology-aclu-letter/index.html
  2. https://thelinuxmaniac.files.wordpress.com/2015/04/dutta2014phdthesis.pdf
  3. https://abhishekdutta.org/phd-research/
  4. https://en.wikipedia.org/wiki/Receiver_operating_characteristic
  5. Journal of Experimental Psychology: Applied, 2012, Vol. 18, No. 4, 361–376. © 2012 American Psychological Association.
