Biometrics is supposed to be one of the underpinnings of a modern authentication system. But many biometric implementations (whether fingerprint scans or facial recognition) can be wildly inaccurate, and the only universally positive thing to say about them is that they’re better than nothing.
Also — and this may prove critical — the fact that biometrics are falsely seen as being very accurate may be sufficient to dissuade some fraud attempts.
There are a variety of practical reasons biometrics don’t work well in the real world, and a recent post by a cybersecurity specialist at KnowBe4, a security awareness training vendor, adds a new layer of complexity to the biometrics issue.
Roger Grimes, a defense evangelist at KnowBe4, wrote on LinkedIn about the National Institute of Standards and Technology (NIST) evaluation ratings. As he explained: “Any biometric vendor or algorithm creator can submit their algorithm for review. NIST received 733 submissions for its fingerprint review and more than 450 submissions for its facial recognition reviews. NIST accuracy goals depend on the review and scenario being tested, but NIST is looking for an accuracy goal around 1:100,000, meaning one error per 100,000 tests.
“So far, none of the submitted candidates come anywhere close,” Grimes wrote, summarizing the NIST findings. “The best solutions have an error rate of 1.9%, meaning almost two mistakes for every 100 tests. That is a far cry from 1:100,000 and certainly nowhere close to the figures touted by most vendors. I have been involved in many biometric deployments at scale and we see far higher rates of errors — false positives or false negatives — than even what NIST is seeing in their best-case scenario lab condition testing. I routinely see errors at 1:500 or lower.”
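To put those figures side by side, here is a quick calculation using only the error rates quoted above (the normalization to errors per 100,000 tests is my framing, not NIST's):

```python
# Compare the error rates cited above, normalized to errors per 100,000 tests.
rates = {
    "NIST accuracy goal": 1 / 100_000,      # 1 error per 100,000 tests
    "Best NIST submission": 1.9 / 100,      # 1.9% error rate
    "Grimes' field observation": 1 / 500,   # roughly 1 error per 500 attempts
}

for label, rate in rates.items():
    print(f"{label}: {rate * 100_000:,.0f} errors per 100,000 tests")

# The best lab result misses the stated goal by three orders of magnitude.
gap = (1.9 / 100) / (1 / 100_000)
print(f"Gap between best submission and goal: {gap:,.0f}x")
```

The gap works out to a factor of 1,900: the best lab result is not merely short of the goal, it is in a different league entirely.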
Let that sink in a moment.
In independent testing, many biometrics simply do not deliver on their promise. On top of that, many vendors, including Apple (iOS) and Google (Android), make marketing-driven choices in their settings, deciding how stringent or lenient the authentication will be. They do not want large numbers of people improperly locked out of their phones, so they make the matching less strict, in effect green-lighting device access for a higher number of unauthorized people.
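The strictness setting vendors tune is, in essence, a match-score threshold, and it trades lockouts against break-ins. A toy illustration of that tradeoff (all scores and thresholds here are invented for the example, not any vendor's actual values):

```python
# Toy illustration of the strictness tradeoff (all numbers invented).
# A matcher returns a similarity score in [0, 1]; access is granted at or
# above a chosen threshold.
genuine_scores = [0.92, 0.88, 0.75, 0.69, 0.95]   # legitimate owner's attempts
impostor_scores = [0.41, 0.55, 0.72, 0.30, 0.66]  # other people's attempts

def rates(threshold):
    """Return (false-reject rate, false-accept rate) at this threshold."""
    false_rejects = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_rejects, false_accepts

for t in (0.60, 0.80):
    frr, far = rates(t)
    print(f"threshold={t}: false-reject rate={frr:.0%}, false-accept rate={far:.0%}")
```

With these made-up numbers, the lenient threshold (0.60) locks out no owners but admits 40% of impostors, while the strict one (0.80) reverses the tradeoff. Vendors worried about support calls have every incentive to pick the lenient end.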
Remember those videos showing phones letting in the children or siblings of a phone user when using facial recognition? That’s a big reason why.
Another key factor is theoretical accuracy versus real-world accuracy. Consider two popular phone authentication methods: facial and fingerprint recognition. In theory, facial recognition is much more discerning because it can consider a larger number of datapoints. In practice, though, that often doesn’t happen.
Have you seen any children or siblings getting phone access via fingerprint? Facial recognition has to deal with lighting, cosmetics, hair change and dozens of other factors. None of that is in play when using fingerprint recognition.
There is also a distance issue. With facial recognition, a device needs to be a precise distance from the face to read it accurately — not too close, not too far. I personally use an iPhone with Face ID and I typically see failure 60% of the time. I then adjust the distance a bit and — if I’m lucky — my phone will unlock. (Again, this is not an issue with fingerprints.)
Side note: why do many banking apps deal with check scans (yes, some companies still use checks) in a more sophisticated way? The app will typically tell you to “move the phone closer” or “move back” before it photographs the check image. Why can’t facial recognition do the same thing?
Don’t forget, too, that from an authentication perspective, a lot of biometric deployments are a joke. Why? Because when a biometric authentication fails, access defaults to the phone’s PIN.
In other words, if a thief wants to get around biometrics, all he or she has to do is fail once or twice and then deal with the easier-to-crack PIN. What’s the point? It’s clear that the major phone vendors use biometrics less for authentication or cybersecurity than for convenience. It’s a way to access a device without having to type out a PIN.
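The fallback described above can be sketched in a few lines (a simplified illustration of the logic, not any platform's actual unlock code):

```python
# Simplified unlock flow: a failed biometric check falls back to the PIN,
# so an attacker can always choose to attack the PIN instead.
def unlock(biometric_ok: bool, pin_guess: str, real_pin: str) -> bool:
    if biometric_ok:
        return True
    # Fallback path: the device is only as strong as its PIN.
    return pin_guess == real_pin

# A thief who deliberately fails the face scan simply reaches the PIN prompt.
print(unlock(biometric_ok=False, pin_guess="1234", real_pin="1234"))
```

Because the fallback always exists, the effective security of the whole scheme is that of the weaker factor: adding the biometric on top of the PIN never makes the device harder to break into than the PIN alone.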
As lax as that sounds, Grimes argues that the situation is likely worse. “The NIST tests are best-case scenarios. They are all hideously inaccurate. The security is overpromised in almost every situation,” he said in an interview.
Grimes also expressed concern about the unchanging nature of biometrics. If a password or PIN is compromised, it’s easy to generate a new password or PIN. Even a physical token can be replaced. So what happens if biometrics are compromised? You can’t easily change your face, retina, voice or fingerprint.
“Once stolen, how do you get them back?” Grimes said, adding that reverse-engineering biometric data is quite possible.
The bottom line problem here is perception and characterization. These biometric efforts, as currently implemented, are little more than convenience. (Don’t get me wrong; as a naturally lazy person, I am madly in love with convenience.) But they’re offered as being tailored for cybersecurity. And as a result, users and technologists will rely on biometrics as a protective measure.
There are plenty of ways of deploying biometrics securely. Retina scans are usually secure, and fingerprints work well for people who have properly scannable fingerprints. But voice biometrics, currently used by a variety of financial institutions, remain too easy to fake.
This brings us back to settings decisions. If the settings are sufficiently strict, even facial recognition can become a security mechanism. In short, biometrics is a fine convenience. As a security defense, most of today’s implementations don’t cut it.
Copyright © 2022 IDG Communications, Inc.