July 31, 2019 | By Mike Elgan | 6 min read

Great security depends on the ability to know who is authorized and who is not. This applies to both physical and digital security. But how do you accomplish this?

Increasingly, one answer is facial recognition, which is improving rapidly thanks to artificial intelligence (AI) and related technologies. Hundreds of thousands of facial recognition cameras are being installed for the 2020 Olympics in Tokyo. The U.S. Transportation Security Administration (TSA) is planning to use the technology to “enhance security and the traveler experience.” Half of the casino operators in Macao are testing facial recognition to ban card counters and cheaters. A school in Sweden is even using it for daily roll call.

Facial recognition promises to usher in a post-password world. Smartphones and even desktop operating systems are implementing this kind of biometric authentication, and the best of these systems are extremely accurate.

Yet as facial recognition technology becomes better and more reliable, controversies and opposition to the tech are emerging. The question is: How should the bad news around facial recognition affect decision-making about biometric security going forward?

Recognizing False Negatives

Facial recognition seems to be getting a lot of bad press lately. Advocacy groups are emerging to oppose the technology, and activists in many countries are pushing for bans. In May, San Francisco became the first U.S. city to ban facial recognition by government agencies and departments, including the police department. Since then, two more cities have enacted comparable bans. The two main drivers of these bans are fears around bias and privacy.

There’s also some question about the hackability or spoofability of facial recognition, and stories are emerging that show how facial recognition authentication can theoretically be defeated. Another controversy has emerged over the sources of facial data for training algorithms; some researchers and organizations have been criticized because the faces used for the development of algorithms were pictures of people who did not give their explicit permission.

All of these concerns are valid and need to be addressed by lawmakers, researchers and the public at large. But they also confuse decision-making about the future of facial recognition in enterprise applications by creating the false impression of stalled progress or trouble ahead.

While city bans may have a chilling effect on enthusiasm for biometric authentication, they shouldn’t. These bans apply only to police departments and city government agencies, not to private companies. The top concern is unfair policing. Activists claim, for example, that police facial recognition results vary according to skin color or gender, and they fear that women and minorities may be unfairly impacted; research supports that claim. MIT researcher Joy Buolamwini, for example, found both racial and gender biases in the major recognition systems used by police departments.

Buolamwini’s research is often cited by opponents of police facial recognition programs as evidence that facial recognition is biased. But that same research also demonstrates that more and better data sets can overcome race- and gender-based differences in facial recognition systems. Most of the companies tested in her study have responded by welcoming her approach as a way to accelerate their existing efforts to eliminate skin color and gender differences in accuracy.

Besides bias, the general reliability of facial recognition has been questioned. Critics say that innocent people can too often be identified as criminals. Defenders deny the claim. Either way, it’s true that facial recognition isn’t perfect.

But this is another transient concern, relevant to yesterday’s technology, not tomorrow’s. Deep learning and neural networks are rapidly improving the accuracy of facial recognition systems. Soon, the best technologies will recognize faces with 99.99 percent accuracy (which, by the way, is far better than humans can manage).
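To put an accuracy figure like that in perspective, here’s a quick back-of-the-envelope calculation (the numbers are illustrative assumptions, not benchmark results):

```python
# Hypothetical illustration of what 99.99 percent accuracy means at scale.
accuracy = 0.9999            # assumed per-comparison accuracy
error_rate = 1 - accuracy    # one error per 10,000 comparisons
faces_screened = 1_000_000   # e.g., a large venue over several days

expected_errors = faces_screened * error_rate
print(f"Expected misidentifications: {expected_errors:.0f}")  # ~100
```

Even tiny error rates add up at scale, which is why further accuracy gains matter so much for large deployments.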

Over the next few years, it’s likely that facial recognition will be nearly perfected, eliminating the bias and inaccuracy controversies. But that creates another problem: privacy. Unlike a password, you can’t reset your face. Another unique quality of facial recognition is that the data (a picture of your face) can be captured at a distance, or online, with an ordinary camera or even a security camera.

That privacy fear is based on the now-obsolete idea that it’s possible to move about in society without being photographed. The truth is that our faces are recorded by cameras every time we shop, go to the bank, drive or walk down the street, or fly on a commercial airline.

Smartphones have also put the public in the photography business on a massive scale. Estimates vary, but it’s uncontroversial to say that more than 1 trillion photos are now taken every year. Many are uploaded to social media sites, where facial recognition is applied. Still others are backed up to private accounts on cloud photo sites, where facial recognition is also applied. Sites like Google Photos, which now has more than 1 billion users, encourage users to identify people in their photographs by name. In other words, keeping your face out of facial recognition databases is neither practical nor likely.

Don’t get me wrong: Privacy is incredibly important. But the road to privacy can’t have anything to do with hiding your face from the cameras or preventing images of your face from being processed for facial recognition.

It’s also becoming clear that the face-photo glut means that controversies around permission for photos that train algorithms are fleeting as well. Face photos are everywhere; there’s no shortage. And, in any event, the use of public photos for private research (where the faces are never publicly shown by the researchers) isn’t actually an invasion of privacy. Making private photos public invades privacy; using public photos privately does not.

Thinking Clearly About Facial Recognition

The function of, or purpose for, facial recognition can vary, which brings us to the all-important distinction between identification, authentication and authorization. Here’s an oversimplified definition for each of these terms:

  • Identification: Figuring out who you are
  • Authentication: Proving who you are
  • Authorization: Granting or denying access

Some implementations of facial recognition can provide one, two or all three of these services.
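As a rough sketch of how these three services differ in code (a hypothetical class with simple strings standing in for real biometric templates, not any vendor’s API), consider:

```python
from typing import Optional

# Minimal sketch: the three distinct questions a facial recognition
# system can answer. Strings stand in for real face templates.
class FaceSystem:
    def __init__(self):
        self.known_people = {}   # template -> name (identifying data)
        self.allowed = set()     # templates that should be granted access

    def identify(self, template) -> Optional[str]:
        # Identification: figure out WHO this is.
        return self.known_people.get(template)

    def authenticate(self, template, claimed_name: str) -> bool:
        # Authentication: prove the face matches a claimed identity.
        return self.known_people.get(template) == claimed_name

    def authorize(self, template) -> bool:
        # Authorization: grant or deny access; no name required at all.
        return template in self.allowed
```

Note that authorize() never touches a name, a point that matters later.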

As a simple example, a driver’s license theoretically uses “facial recognition” (by a human) for all three. If someone is unconscious in a medical emergency, fishing a license out of the wallet and matching face with photo enables identification: first responders can use it to figure out who you are. You can use a license for authentication as well, proving who you are when verifying an account on a social network or using a credit card. And, of course, it’s useful for authorization: showing a valid, current driver’s license when renting a car proves that you’re legally allowed to drive.

But unlike a driver’s license, AI-based facial recognition can be limited to performing just one or two of these services.

Most of the controversy around facial recognition revolves around identification: when your face is associated with other data about you, such as your name and history (a police record, medical record or location history, for example).

Identification is indeed where the controversy lies. But the main use for biometrics in enterprise security is not identification; it’s advanced authentication and authorization. Authentication is, by definition, personalized. When I log in to my phone using facial recognition, I’m proving it’s me, specifically, and since I’m the owner of the phone, access is granted.

But here’s the important point that is often missed: Authorization isn’t necessarily personalized. Facial recognition authorization can be based on a database of people who are not authorized — a blacklist. With this approach, if my own biometric data is not in the system at all, I can be granted access based on the fact that the system does not know me.

Even a whitelist approach, where authorization is granted when a face is recognized, can be nonpersonalized. By associating a positive recognition with “grant access,” but not with any personalized data, a facial recognition system can be nonidentifying, with no association between face and personal data of any kind (other than pass or no-pass).
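A minimal sketch of both nonpersonalized models (hypothetical template sets and exact matching for brevity; a real system would compare embeddings against a similarity threshold):

```python
def authorize_blacklist(template, banned_templates) -> bool:
    # Deny-list model: grant access to anyone the system does NOT know.
    # A person whose biometric data was never enrolled is admitted.
    return template not in banned_templates

def authorize_whitelist(template, allowed_templates) -> bool:
    # Allow-list model: a positive match maps only to "grant access."
    # No name or personal record is attached to the stored template.
    return template in allowed_templates
```

In both functions, the decision is pass or no-pass; neither requires the system to know who anyone is.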

The point is that facial recognition technology is not inherently privacy-violating, or even personally identifying. That’s a matter of practice, policy or, in the case of municipal systems, law.

Facial recognition can assure privacy as easily as it can violate it. It can keep the wrong eyes out of your stolen smartphone, for example. It can even be used to protect you from surveillance: some low-cost consumer home security systems can be set to record video only when they do not recognize a face.
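That surveillance-limiting pattern is simple enough to sketch in a few lines (hypothetical function and variable names; a real product would hook into the camera vendor’s SDK):

```python
def should_record(detected_templates, household_templates) -> bool:
    # Privacy-protecting default: keep footage only when a stranger appears.
    return any(t not in household_templates for t in detected_templates)

# e.g., should_record({"visitor_42"}, {"alice", "bob"}) returns True
```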

The bottom line is that controversies around facial recognition have little or nothing to do with deploying facial recognition for enterprise security. Most of the concerns are based on temporarily flawed technology, enterprise security is mostly about advanced authentication and authorization, and there’s nothing inherent to facial recognition that violates privacy.

The Real Challenges Facing Facial Recognition

While the public is focused on transient controversies, professionals should pay attention to the actual challenges associated with facial recognition technologies. The primary risk is that facial recognition will soon become so reliable and effective that organizations may be tempted to overdeploy it or rely on it exclusively for certain applications. It’s important to maintain alternative methods of authentication and authorization for the foreseeable future.
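One way to honor that advice is a simple fallback chain, sketched below with hypothetical factor names; the only point is that facial recognition is never the sole path in:

```python
# Ordered fallback chain: face first, but never the only option.
AUTH_METHODS = ("face", "hardware_token", "pin")

def authenticate(user, check_factor) -> bool:
    # check_factor(user, method) stands in for a real factor check.
    return any(check_factor(user, method) for method in AUTH_METHODS)
```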

In addition, while it’s true that there’s nothing inherently privacy-violating about facial recognition, it’s also true that keeping facial recognition from violating the privacy of employees, executives, partners and customers requires effort, intention and investment in solutions that support biometric privacy.

Another short-term challenge is resistance from employees and customers who may have formed negative opinions based on all the bad press. Acceptance can’t be assumed or taken for granted, and it’s important to remain sensitive to the fact that many people feel uneasy about facial recognition. Deployment and training need to be accompanied by clear communication about exactly how the technology is being used.

Finally, new AI tools are emerging to defeat facial recognition and other biometric solutions, and like any other security risk, they have to be guarded against indefinitely. As with all things related to enterprise security, complacency is not your friend.
