Android’s new biometric spec for 'strong security' is anything but
Google has released new biometrics specs for Android devices, with the top-level “strong security” option requiring only “a spoof and imposter acceptance rate not higher than 7%.” In practice, that means roughly one in 14 spoof or imposter attempts could be accepted. Most biometrics specialists say that for something to be considered “high security,” that spoof and imposter acceptance rate should be closer to 1%.
That prompted me to ask Google for comment. Google replied with an unattributed emailed statement that doesn’t directly defend the levels it chose — but it did say security decisions are ultimately up to each handset manufacturer.
“Android hardware OEMs alone choose the tier of biometric strength they implement in their products and for device unlock,” Google said. “Hardware OEMs also ensure that the security of their product can meet Android Compatibility Definition Document (CDD) requirements. We are constantly working with the Android OEM ecosystem to raise the bar for user security. With a global Android OEM ecosystem, we take a balanced approach on issuing new requirements to ensure the Android OEM ecosystem can adequately prepare for and implement stricter requirements at-scale, while also ensuring requirements enable OEMs to protect users.”
That would be a reasonable position had the company not created three distinct categories: Class 1 for “convenience,” Class 2 for “weak” security and Class 3 for “strong security.” Why not give handset manufacturers the choice of which one to use, but make the strong security option truly strong?
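For context, those classes are not just paperwork; they map directly onto the authenticator flags Android exposes to app developers through the androidx.biometric library. Here is a minimal sketch of that mapping, assuming the standard BiometricManager API (the helper function name and the label strings are my own):

```kotlin
import android.content.Context
import androidx.biometric.BiometricManager
import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_STRONG
import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_WEAK

// Illustrative helper: report the strongest CDD biometric class this device
// can satisfy. BIOMETRIC_STRONG corresponds to Class 3 ("strong"),
// BIOMETRIC_WEAK to Class 2 ("weak"); Class 1 ("convenience") sensors are
// not exposed to apps at all.
fun strongestAvailableBiometricClass(context: Context): String {
    val manager = BiometricManager.from(context)
    return when {
        manager.canAuthenticate(BIOMETRIC_STRONG) == BiometricManager.BIOMETRIC_SUCCESS ->
            "Class 3 (strong)"
        manager.canAuthenticate(BIOMETRIC_WEAK) == BiometricManager.BIOMETRIC_SUCCESS ->
            "Class 2 (weak)"
        else -> "No app-visible biometric"
    }
}
```

The point of the sketch is that the “strong” label travels all the way down into app code, which is exactly why the 7% bar sitting behind it matters.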
Google said it “strongly recommends disclosing the biometric class of a biometric and the corresponding risk of enabling it to users for better transparency.” Therein lies the problem. By labeling this level as delivering strong security, Google will likely mislead users into thinking they are far more protected than they really are.
To Google’s credit, it also said the “CDD requires that Android OEMs provide language in the user onboarding for any biometrics to indicate that this method of authentication is less secure than PIN, patterns and passwords.” That is true, but Google could instead set specifications that actually make biometrics stronger. Wouldn’t that be the better route to take?
“This ‘strong security’ benchmark is laughably bad,” said Jay Meier, senior vice president of North American operations for FaceTec. “To describe this benchmark as ‘strong’ should qualify as fraudulent. Seriously. This is what many will use in conjunction with the FIDO PassKeys. It’s like Android wants to enable identity theft and cybercrime.”
Google’s new specs “don’t align with biometric expectations and it doesn’t jibe with industry best performance,” said Anonybit CEO Frances Zelazny. “That is a very very high error rate.”
For enterprise CISOs, there's an even bigger issue. For several years, enterprise security teams have been seriously evaluating ways to move to passwordless options, a.k.a. passkeys — typically as part of a slow shift to a zero-trust environment.
Many of these efforts involve authentication methods such as behavioral analytics, continuous authentication, FIDO fobs and — invariably — some form of biometrics. There are two broad ways for an enterprise to deliver biometrics: internally, through a custom-built or third-party system, or by piggybacking, where the enterprise relies on whatever biometrics are on the phone in the employee’s or contractor’s pocket. (Piggybacking is part of a BYOD approach.)
Piggybacking is light-years more cost-effective, as there is essentially zero biometrics cost. But it also means the enterprise is limited to whatever implementation the major phone makers offer. And given that both Apple and Google have leaned far more heavily toward convenience than security, enterprises must either create their own robust biometrics system or, candidly, treat biometrics as a mere convenience that doesn’t meaningfully authenticate users.
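In practice, piggybacking usually means binding enterprise credentials to the device's own biometric gate. A minimal sketch of that pattern using the Android Keystore, with an illustrative key alias and assuming API level 30 or later for the authentication-type parameter:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Sketch of the "piggyback" pattern: an enterprise app generates a signing
// key in the Android Keystore that can only be used after the device's own
// biometric check. The alias and purposes here are illustrative; the point
// is that the enterprise inherits whatever biometric the OEM shipped.
fun generateBiometricGatedKey() {
    val spec = KeyGenParameterSpec.Builder(
        "enterprise_auth_key",
        KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
    )
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setUserAuthenticationRequired(true)
        // API 30+: require a Class 3 ("strong") biometric for every use of the key.
        .setUserAuthenticationParameters(0, KeyProperties.AUTH_BIOMETRIC_STRONG)
        .build()

    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        .apply { initialize(spec) }
        .generateKeyPair()
}
```

Whatever sensor and matching threshold the OEM shipped is now the gatekeeper for that key; the enterprise never sees, let alone tunes, the matching criteria.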
That is a big problem.
Why couldn’t Google have rolled out four or even five categories — and then offered a truly strong security option for OEMs to select?
Another part of the problem is how biometrics are presented. Mathematically, facial recognition sounds quite secure, given the large number of datapoints it evaluates. But the real test of authentication accuracy is how strictly or leniently the system weighs those datapoints. And because handset manufacturers are far more worried about locking out a legitimate user than about letting a thief gain access, they choose very lenient criteria. That makes the number of possible datapoints being evaluated largely irrelevant.
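A toy illustration of that trade-off, using entirely made-up similarity scores, shows how the acceptance threshold, not the number of datapoints, drives the impostor acceptance rate:

```kotlin
// Purely illustrative, synthetic similarity scores (0.0 to 1.0) for the
// device owner and for impostors attempting to unlock the device.
val genuineScores = listOf(0.92, 0.88, 0.95, 0.90, 0.85)
val impostorScores = listOf(0.40, 0.55, 0.72, 0.61, 0.83)

// Fraction of impostor attempts accepted at a given threshold.
fun impostorAcceptRate(threshold: Double): Double =
    impostorScores.count { it >= threshold }.toDouble() / impostorScores.size

// Fraction of genuine attempts rejected at the same threshold.
fun falseRejectRate(threshold: Double): Double =
    genuineScores.count { it < threshold }.toDouble() / genuineScores.size

fun main() {
    // A lenient threshold almost never locks out the owner but lets a
    // noticeable share of impostors through; a strict one flips the trade-off.
    for (t in listOf(0.6, 0.8, 0.9)) {
        println("threshold=$t impostorAccept=${impostorAcceptRate(t)} falseReject=${falseRejectRate(t)}")
    }
}
```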
Both Apple and Google have also pushed biometrics because they are supposedly more secure than a 6-digit PIN. That would be valid — except that both companies go right back to that PIN if the biometric authentication fails. In other words, a thief who wants to bypass biometrics merely needs to fail once, and device access falls back to the PIN. (One of the few security advantages of biometrics is that they effectively thwart shoulder-surfing, the top method used to steal a PIN.)
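That fallback isn't an OEM quirk; it's wired into the platform's own recommended flow. A minimal sketch using the androidx BiometricPrompt API (the title text is mine), in which the device PIN, pattern or password is accepted whenever the biometric check isn't used or doesn't succeed:

```kotlin
import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_STRONG
import androidx.biometric.BiometricManager.Authenticators.DEVICE_CREDENTIAL
import androidx.biometric.BiometricPrompt

// A typical unlock prompt: accept a Class 3 ("strong") biometric OR the
// device credential. If the face or fingerprint match fails, or the user
// simply chooses the credential option, the PIN the biometric was supposed
// to improve on is accepted instead.
val promptInfo: BiometricPrompt.PromptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Unlock your work profile")
    .setAllowedAuthenticators(BIOMETRIC_STRONG or DEVICE_CREDENTIAL)
    .build()
```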
As long as IT admins and security pros internalize that consumer biometrics are solely for convenience, no harm is done. But if they opt to rely on them for authentication, things are not going to end well. And Google’s new specs do very little to help.