As I anxiously awaited the last season of “Game of Thrones,” I found myself thinking about my favorite character from the series: the assassin who belongs to the mysterious cult of “Faceless Men.” Specifically, I thought about his ability to change his face and appearance at will, and how this character parallels the emergence of deepfake images and videos and the science of facial recognition.
After some cursory research into the creation of deepfake videos and a few of the forensic tricks used to distinguish real videos from the ones created by generative adversarial networks (GANs) backed by artificial intelligence (AI) algorithms, I considered how these deepfakes would impact the field of cyberthreat intelligence and the intelligence community as a whole. Then I came up with a couple of ways artificially intelligent systems could have a positive impact on the intelligence community.
The Obsession With Selfies and Facial Recognition
Is all of the suspicion around facial recognition technology backed by artificially intelligent systems merited? Here are some interesting points to ponder:
- A report by The Telegraph noted that 1 million selfies are taken every day.
- The International Business Times analyzed selfie trends and found that millennials are expected to take more than 25,000 selfies over the course of their lifetime.
- In 2015, Google reported that users uploaded 13.7 petabytes of photos to its servers. After analyzing the labels for the photos, which are applied by a machine learning algorithm, Google found that roughly 24 billion of those pictures were selfies.
- The Daily Mail reported that 17 million selfies are uploaded to social media each week.
So what’s the problem with all this selfie-taking? Consider the following points about the artificially intelligent systems behind facial recognition:
- Individuals who are subjected to facial recognition never see the programmers behind the algorithms or the engineers behind the technology.
- There is a lot of talk about bias in facial recognition technology, particularly its higher error rates when identifying people from certain demographic groups.
- The average individual does not understand the intent behind the use of facial recognition technology, and there is concern that the technology could be used to falsely implicate an individual in a crime.
- There is no guarantee that the technology will be used for sound reasons rather than, for example, to conduct undue surveillance or systematically target specific segments of society.
Considering the concerns outlined above, it would appear that the argument against artificially intelligent facial recognition systems has less to do with safeguarding individual privacy and more to do with how the systems and technology will be used. After all, a person who is culturally and psychologically compelled to upload their face to the web is probably not overly concerned with privacy or the specifics of their facial geometry when putting it out there for everyone to see.
But what happens when selfies are decomposed, spliced and recomposed to form the face of a person who does not exist and never has? These so-called deepfakes could become a serious problem in many respects, especially when it comes to open-source intelligence (OSINT) collection.
Deepfake Intelligence Dossiers: Sifting Through a Sea of ‘Faceless Men’
It is a common practice within intelligence communities to create dossiers — also called “intelligence cards” in the cybersecurity community — on threat actors. The use of artificially intelligent algorithms to create deepfake dossiers could be both a blessing and a curse to the intelligence community.
On one hand, it is possible for an adversary or threat actor to create dozens or hundreds of fake dossiers on themselves to throw off analysts who might be hunting for them — assuming the threat actor is a person, of course. On the other hand, it is also possible to hide the true identity of an intelligence officer behind dozens or even hundreds of fake online identities, photographs, videos, electronic paper trails and voice prints. Unless GANs are trained to both create deepfakes and identify them just as rapidly, there is no way to know who a threat actor truly is beyond close personal contact — just like the Faceless Men.
The two-dimensional facial recognition methods in use today — that is, a person holding a government document such as a driver’s license up to their face while staring into a camera — will not solve the problem. If anything, they make it worse: at least two forms of personally identifiable information (PII) are offered up in a single camera shot or video, which makes identity theft far easier for a criminal to pull off.
Perhaps there should be a bill to criminalize the creation and distribution of deepfakes, or at least some realistic legislation around their creation and use. So much of a person’s life is online now, from social media to blogging, renewing a driver’s license, paying a parking fine, signing up for college courses, working from home — the list goes on. Unless every entity a person conducts online business with over their lifetime has a way to identify fake photographs and voices in the next two to three years, databases will quickly be filled with people who don’t actually exist.
What Biometric Data Is Considered PII?
Nearly every entity — federal, state, civil, county and corporate — requires some kind of photo ID to sign up for a new account or service. However, only a handful of countries consider a person’s facial geometry or voice signature to be PII, and no country currently considers a person’s gait to be PII. In these three areas alone, humanity is already behind artificial intelligence and its uses. It is time for the cybersecurity community to apply a bit of rational thinking to catch up.
How to Prepare for a Future of Faceless Threats
Below are a few steps to help businesses, government agencies, universities and research institutions get started developing, deploying and using facial recognition technology.
- Match facial recognition patterns, such as facial geometry, against multiple sources when training the AI technology and its algorithms and when building the corpus — especially when facial recognition is used for OSINT purposes. If the AI cannot find at least three exact matches or patterns outside of the original image source, consider the image to be false (see the first code sketch after this list).
- Develop federal- and state-level legislation to prevent the dissemination of deepfake images for the purposes of blackmailing, extorting or harassing individuals.
- Consider closing the AI systems to further training once the machine learning algorithms demonstrate a high level of accuracy to prevent future corruption of a well-established corpus.
- Allow individuals to opt into facial recognition programs, much like air travelers can opt into the TSA PreCheck program. This would be particularly useful if programmers want to develop algorithms to track changes in facial geometry due to aging and illness.
- Allow manual intervention to resolve false positives for individuals who may have had significant facial reconstructive surgery or other medical procedures, so that their identities can still be validated.
- Implement 3D facial recognition technology that utilizes infrared light to scan the user’s face and is designed to detect small nuances, such as smile lines, frown lines, crow’s feet, skin texture and genuine smiles.
- Always combine facial recognition with at least one other form of authentication, such as a voice print, PIN or one-time password (see the second code sketch after this list).
- Always allow for and build in a manual override for any artificially intelligent system or technology in use so that a human operator can take over and guide the system appropriately.
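To make the first recommendation more concrete, below is a minimal sketch of the multiple-source matching rule. It assumes a hypothetical face-embedding pipeline feeding it vectors; the cosine-similarity threshold of 0.92 and the three-match minimum are illustrative values, not vetted parameters.

```python
# Minimal sketch of the "match against multiple independent sources" rule.
# The similarity threshold and minimum match count are illustrative assumptions.
from typing import List

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def count_independent_matches(candidate: np.ndarray,
                              source_embeddings: List[np.ndarray],
                              threshold: float = 0.92) -> int:
    """Count how many embeddings from sources other than the original image match."""
    return sum(1 for emb in source_embeddings
               if cosine_similarity(candidate, emb) >= threshold)


def is_probably_authentic(candidate: np.ndarray,
                          source_embeddings: List[np.ndarray],
                          min_matches: int = 3) -> bool:
    """Treat the image as false unless it matches at least `min_matches` embeddings
    drawn from sources outside the original image source."""
    return count_independent_matches(candidate, source_embeddings) >= min_matches
```

In practice, the source embeddings would come from corroborating collections such as other photo sets, video frames or archived records, not from crops of the image under review.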
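Likewise, here is a minimal sketch of pairing a face-match decision with a second authentication factor, per the recommendation above. The face_match_score input and the 0.90 acceptance threshold are assumptions about the upstream recognition system; the PIN check relies only on the Python standard library.

```python
# Minimal sketch of requiring a second factor alongside facial recognition.
# The face-match threshold is an illustrative assumption.
import hashlib
import hmac


def verify_pin(submitted_pin: str, stored_pin_hash: bytes, salt: bytes) -> bool:
    """Compare a submitted PIN against its stored salted hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", submitted_pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_pin_hash)


def authenticate(face_match_score: float,
                 submitted_pin: str,
                 stored_pin_hash: bytes,
                 salt: bytes,
                 face_threshold: float = 0.90) -> bool:
    """Require BOTH a sufficiently strong face match and a valid PIN."""
    return face_match_score >= face_threshold and verify_pin(
        submitted_pin, stored_pin_hash, salt)
```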
The emergence of deepfake photos and videos has already shown what skilled programmers and hackers can do with personal data. Biometric technology is not the area to skimp on cost, highly trained personnel or proven products, nor the area to forgo a deep understanding of the technology and how it can be used for nefarious purposes.
If your company is using or planning to use facial recognition and other biometric technologies, set aside an operating budget for a small team of data scientists who can analyze the biometric data that has been collected. Their job is to verify that all photos, videos, voice prints, palm prints, fingerprints, iris and retinal patterns, and keystroke patterns were indeed captured or recorded from a real person, not produced by an enterprising actor using artificially intelligent algorithms to create people who do not exist.
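As a rough illustration of what such a team's tooling might look like, the sketch below batch-audits stored biometric records by reusing the is_probably_authentic() check from the first sketch above. The record fields shown here are hypothetical placeholders for whatever store the organization actually uses.

```python
# Minimal sketch of a batch audit over stored biometric records.
# Assumes is_probably_authentic() from the earlier sketch is in scope, and that
# each record dict carries hypothetical "record_id", "subject_id" and
# "face_embedding" fields.
def audit_biometric_store(records, reference_embeddings_by_subject):
    """Return the IDs of records whose face data fails cross-source verification."""
    suspect_ids = []
    for record in records:
        references = reference_embeddings_by_subject.get(record["subject_id"], [])
        if not is_probably_authentic(record["face_embedding"], references):
            suspect_ids.append(record["record_id"])
    return suspect_ids
```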