I had the opportunity to be on a panel at an event hosted by IBM in New York last week focused on emerging fraud issues. Participants ranged from fraud managers at financial institutions to data scientists from various vendors. It was a fantastically open and interactive dialogue that brought up many issues, none of which we easily resolved. One of the really interesting conversations was around the use of biometrics and their impacts on privacy.
One of the more interesting tangents we went down was the notion of sharing known bad-actor voiceprints between institutions. We all recognized there are fraud rings hitting call centers, and it is a rather small number of people calling again and again. Wouldn’t it be great to have a voiceprint database of these individuals to shut them down right away across institutions?
This led to a lot of conversation about privacy laws and regulations, and what really constitutes personally identifiable information (PII). We delved into what biometrics really are, how consumers—and lawmakers—perceive biometrics, and the potential objections from legal and compliance departments. We came up with a lot of questions, but not a lot of answers.
So let’s break down some of these ideas:
There is generally a poor understanding of what biometrics actually are, how they work, and how they are stored. The reality is that a mathematical hash representing various biometric features is created and encrypted. It cannot be reverse engineered and cannot be tied back to an individual without the cryptographic key.
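To make the one-way property concrete, here is a minimal toy sketch in Python. It quantizes a (hypothetical) feature vector into coarse bins as a crude stand-in for the fuzzy matching real biometric engines perform, then derives a keyed one-way digest. The stored template can confirm a future match only with the organization's key, and cannot be reversed into the underlying features. The feature values, binning scheme, and key handling are all illustrative assumptions, not how any particular vendor's system works.

```python
import hmac
import hashlib


def make_template(features, key):
    # Quantize each feature into coarse bins so small measurement noise
    # maps to the same bin (a toy stand-in for real fuzzy matching).
    quantized = bytes(int(f * 10) & 0xFF for f in features)
    # Keyed one-way digest: without `key`, the stored template cannot be
    # reversed into features or linked to records in another database.
    return hmac.new(key, quantized, hashlib.sha256).hexdigest()


def matches(candidate_features, stored_template, key):
    # Constant-time comparison of the candidate's template to the stored one.
    return hmac.compare_digest(make_template(candidate_features, key),
                               stored_template)


# Hypothetical enrollment and verification.
key = b"org-secret-key"  # held by the institution, never stored with templates
enrolled = make_template([0.31, 0.72, 0.55], key)

print(matches([0.312, 0.718, 0.551], enrolled, key))  # same person, slight noise -> True
print(matches([0.90, 0.10, 0.40], enrolled, key))     # different person -> False
```

The key point the sketch illustrates: a stolen template is an opaque digest. Without the institution's key (and matching software), it identifies no one and recreates nothing.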
This led to a question: is a biometric identifier that can uniquely identify an individual in one direction (user to organization) but cannot be reversed (organization’s database to individuals) PII? And how does this interplay with existing privacy and data security laws and regulations?
It is counterintuitive, but a biometric signature is actually more anonymous than what we typically consider PII, such as name, address, date of birth, and tax ID. Even if one were to include the types of information used in knowledge-based authentication, such as mother’s maiden name, past addresses, or past cars owned, this information is still much easier to use, for good or bad, than a biometric signature. The biometric signature is an encrypted mathematical representation of physical attributes that can be matched against future samples.
Are biometric signatures PII? Yes, of course, to some degree—accompanied by the right software and encryption keys, they can uniquely identify an individual. At the same time, stealing this information is useless for stealing an identity, breaking into an account, or opening a new account. One cannot recreate a face or a fingerprint from a biometric signature.
From a legal and regulatory perspective, the question remains how to classify this information: is an encrypted hash that can’t be used for anything but affirming identity PII or not? How regulators and governing bodies decide will impact how this information can be used and shared, for better or for worse.
There aren’t many answers now, but these questions will be asked and addressed through regulation and legislation, just not soon enough. Financial institutions have an opportunity to influence these decisions and demonstrate responsible and secure practices. Education is one of the most critical components in helping the public and government bodies understand the real risks, limitations, and benefits of biometric technology.