IACSP Interview with Acuant’s Paul Townsend, CISSP
July 13, 2022
Originally published in the June 2022 issue of IACSP.
“In this interview we speak with digital identity expert Paul Townsend, CISSP, of Acuant, a GBG company. We discuss the key changes to the fraud landscape impacting US government agencies, the challenges they face, the digital identity equation that must be solved, and how evolving identity verification tactics, methods and standards are setting a new stage in the field.
Q: How has the fraud landscape changed in recent years?
As the number of U.S. citizens interacting with government agencies online grew significantly due to the Covid-19 pandemic, the door to fraud was opened, extending new opportunities to criminals. Many of the programs introduced to ease pandemic-related hardships were prime targets for fraud, particularly schemes targeting state unemployment programs across the country.
Unfortunately, the lack of solutions and processes in place for identity verification made it easy for criminals to perpetrate fraud. Although some states were more prepared than others, even those with processes in place weren’t doing a thorough job, simply verifying static information such as home addresses or dates of birth. In many cases, records were not cross-referenced to confirm that the person filing the unemployment claim was still alive.
We saw the same with PPP loans: as efforts were made to get money into people’s hands as quickly as possible, it became easier for criminals to take advantage of the system. There was a lot of money going out the door with very little accountability – over $80B has since been identified as the low-end estimate of the cost of that fraudulent activity.
Q: What challenges do government agencies face in preventing fraud, particularly when it comes to identity verification?
The focus on privacy and data protection has intensified and will continue to do so; however, government agencies face a double-edged sword when it comes to leveraging advances in identity verification technology and processes. The recent effort by the IRS to require “video selfies” from anyone seeking to access their tax records online was met with incredible concern and backlash, and was quickly dismantled. However, most of the stated (unfounded) concerns regarding biometric bias and privacy caused enough “government uncomfortableness” to result in the removal of the technology without consideration of the facts provided by NIST Interagency Report (NISTIR) 8280 – Face Recognition Vendor Test (FRVT), Part 3: Demographic Effects, published in December 2019, which debunked these concerns.
This shows us that a good deal of education needs to be done to properly resolve unnecessary fears related to biometrics. When done properly, as an added layer of identity verification, biometrics allow for a high level of assurance in verifying identities. However, fraudsters are unrelenting in their attempts to circumvent security measures and are now using advanced tactics, such as deepfakes, in their attempts to pass biometric measures. As such, adding liveness detection, particularly passive liveness detection as part of the biometric verification process, is a crucial step to preventing fraudulent identities from succeeding.
However, perceptions, accurate or not, often guide government decision processes. Due to the scrutiny of processing performed in the public domain, government agencies are often very concerned about the public’s interpretation of their efforts. This can result in a very effective risk mitigation technology being dropped from a program unnecessarily, leaving a large opening for fraudulent actors. With biometric matching processes, the concern is often biometric bias (officially termed demographic differentials by the international standards community that measures this effect).
The fact is, when a facial image from an identity document that has been proven authentic is matched one-to-one against a selfie provided by the presenter, the demographic differential, or bias, is minimal – demonstrably lower than that of a manual human evaluation of the match. As stated in the International Biometrics + Identity Association (IBIA) explanation of the results presented in NISTIR 8280, “the most accurate high-performing verification algorithms (a one-one verification search where two images are compared to determine similarities of the faces) display both low false positives and false negatives; more than 50 tested algorithms have false non-match rates (misses) less than three per thousand, and false match rates (erroneous matches) less than one per hundred thousand, again, greater accuracy than humans could ever achieve.”
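To make the two quoted error rates concrete, the sketch below computes them the way biometric performance testing defines them: the false non-match rate (FNMR) is the share of genuine comparison attempts that are wrongly rejected, and the false match rate (FMR) is the share of impostor attempts that are wrongly accepted. The function and the counts are illustrative, not taken from NISTIR 8280 itself.

```python
def error_rates(false_non_matches: int, genuine_attempts: int,
                false_matches: int, impostor_attempts: int) -> tuple[float, float]:
    """Return (FNMR, FMR): misses among genuine attempts, and
    erroneous matches among impostor attempts."""
    fnmr = false_non_matches / genuine_attempts
    fmr = false_matches / impostor_attempts
    return fnmr, fmr

# The thresholds quoted above: FNMR under 3 per 1,000 genuine attempts,
# FMR under 1 per 100,000 impostor attempts.
fnmr, fmr = error_rates(3, 1_000, 1, 100_000)
```

Note that the two rates trade off against each other at a given match threshold; the quoted figures are notable because the top algorithms keep both low simultaneously.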
Q: Do you think greater adoption of advanced identity verification methods and technology is in the future?
Government leaders know that technology is critical for the future and can have a positive impact on current and future initiatives, but budget constraints and the lack of a clearly defined use case for a new technology often slow adoption.
However, the conversation around national digital IDs is not new. Governments around the world have expressed both interest and concern, in roughly equal amounts, about adopting these in their countries. And still, government entities continue to struggle with determining the best methods for approaching identity management on a global scale.
Beyond the obvious need to standardize all identity cards or processes, the comfort level with technology advancements, such as biometrics, differs from country to country. Setting standards for the global ID verification process is no small task and there is a great deal of work to be done.
Looking at the U.S., as more Americans grow comfortable with verification methods such as iris scanning or facial recognition software, we’ll see greater adoption of biometrics. Already, airports have adopted biometrics for frictionless security point transit and airline boarding processes, as well as to support immigration and emigration processes. And it’s proven to work: a newly introduced facial recognition system helped catch an impostor at Dulles International. A 26-year-old man traveling from Brazil with a French passport was flagged by the airport’s facial recognition technology, which had been put in place just days prior. Upon a secondary check, he was found to be from the Republic of Congo and impersonating the man whose picture was in his passport.
Q: What are the biggest vulnerabilities for government agencies when it comes to fraud and what needs to be done to mitigate risk?
The biggest vulnerabilities related to identity fraud are in vetting the identity of the individual seeking access to a service or offering. This is especially true for remote (non-in-person) vetting processes. If the documents used as evidence of identity (or any associated attributes) either cannot be or are not authenticated, or if the individual cannot be strongly linked (bound) to that evidence, a significant risk exists that cannot be mitigated by downstream processes. Before a person is onboarded or confirmed for additional service offerings, they need to prove their identity using risk mitigation mechanisms appropriate to the value of that access. There are even situations where it is appropriate to re-authenticate an individual at the time of a transaction, in order for them to perform a higher-value transaction or to prove that they are the actual owner of the travel document they have presented to cross a border using a self-service process.
The risk mitigation techniques employed to address these vulnerabilities are document authentication against a robust database of document references used to evaluate the authenticity of the presented document, and/or a cryptographic authentication process that can evaluate the authenticity and contents of digitally signed, government-issued credentials or their digital equivalents (such as ePassports and Mobile Driver Licenses). This step establishes the trustworthiness of the identity document itself. The presenter can then be tied to the document through facial recognition matching, proving they are the owner of the authentic document and not simply a lookalike. For remote processes, an additional mitigation is a presentation attack detection tool that ensures the person in the selfie is live – not wearing a mask, not simply holding up a picture of the face to be matched, and not streaming a video to the remote process.
Used in combination, these mitigation steps establish a well-vetted trust anchor for the identity and significantly reduce the fraud attack surface. Even when used individually, each will help reduce fraud: bad actors typically won’t try to beat the risk controls; they will simply look for a way around them, or go elsewhere.
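The layered vetting process described above can be sketched as a simple decision pipeline. This is a minimal illustration, not any vendor’s implementation: the `Evidence` structure, function names, and thresholds are all hypothetical, standing in for the document authentication, one-to-one face match, and passive liveness checks a real system would perform.

```python
from dataclasses import dataclass

# Illustrative thresholds; real deployments tune these to the risk
# level of the transaction being protected.
DOC_AUTH_THRESHOLD = 0.90
FACE_MATCH_THRESHOLD = 0.95
LIVENESS_THRESHOLD = 0.80

@dataclass
class Evidence:
    doc_auth_score: float    # document authenticity (template/crypto checks)
    face_match_score: float  # selfie vs. document portrait, one-to-one
    liveness_score: float    # passive presentation attack detection

def verify_identity(e: Evidence) -> tuple[bool, list[str]]:
    """Apply the layered checks and report every reason for failure."""
    failures: list[str] = []
    if e.doc_auth_score < DOC_AUTH_THRESHOLD:
        failures.append("document failed authentication")
    if e.face_match_score < FACE_MATCH_THRESHOLD:
        failures.append("selfie does not match document portrait")
    if e.liveness_score < LIVENESS_THRESHOLD:
        failures.append("liveness check failed")
    return (not failures, failures)
```

The point of collecting all failure reasons rather than stopping at the first is that each layer closes a different attack path: a forged document, a lookalike, or a replayed photo each trips a different check, so a fraudster must defeat all three at once.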
Q: How is technology helping expedite border crossing and reduce cross-border crime?
Speed and accuracy are important to ensure seamless movement of travelers through border checkpoints, and technology plays an important role in verifying that people are who they claim to be.
Law enforcement has long identified document fraud as an essential enabler of other criminal activities such as human and drug trafficking, money laundering, fraud, and terrorism. The growing sophistication of these criminal groups makes it imperative to employ the proper tools and processes to combat cross-border crime.
The most common types of document fraud continue to be the use of high-quality counterfeit documents, forged or modified documents and abuse of genuine ID/travel documents by lookalikes. Each type of document fraud requires the use of different techniques to mitigate the risk of its successful use, so deploying a multi-layered security approach will provide the most complete coverage.
Countering these threats requires the integration of intelligence channels, flash channels, identity document training/scanners/assessment software, operational training, and combined operations. This is another area where biometric technology has proven to be a valuable tool by determining the match of the facial image captured of the traveler against an image provided from their authenticated travel document. This technology allows the border control agency to set acceptable match thresholds to prevent the “rental” or sharing of documents by lookalike travelers. As the technology has progressed, additional metrics such as liveness have been incorporated to ensure that automated controls or self-service kiosk and mobile platform deployments cannot be spoofed.”