We are undergoing a paradigm shift in how we interact with one another.
The pandemic has highlighted the value of remote and digital interactions at a time when face-to-face contact needed to be kept to a minimum.
The result is a renewed focus on technologies that enable secure, trusted human interactions, particularly where authentication is required.
The use cases are varied; there are many touchpoints where one needs to identify oneself before proceeding with an interaction. These include:
- Digital onboarding and eKYC procedures – FIs, telcos, insurance, etc.
- Financial services – mobile banking, preventing SIM swaps, etc.
- Travel – self-service kiosks, automatic gates, etc.
- Age verification – online gambling, tobacco and alcohol, eGaming, etc.
- Unlocking mobile devices to gain access to cardless payments, health records, etc.
- Civil identification – IDs, passports, drivers’ licenses, etc.
However, this gives rise to new challenges.
Presentation attacks and spoofing have become an art form, with perpetrators devising increasingly creative ways to trick authentication systems into giving them access to someone else’s accounts, data, and more.
Hence the rise of smarter technology that combines AI, machine learning, and neural networks to safeguard biometric authentication systems against these presentation attacks.
Authentication systems that rely on face biometric technology must possess the ability to automatically detect presentation attacks.
Also, because remote capture has become mainstream, spurred on by the pandemic, it is crucial that liveness detection be an integral component, since we expect authentication to occur without the supervision of a trusted human, e.g., a bank teller or lawyer.
Liveness detection is the ability of a system to detect whether the biometrics it is presented with are real – from a live person present at the point of capture – or fake – from a spoof artifact.
Biometric matching alone can accurately answer the question, ‘Is this the right person?’ – but it cannot answer the question, ‘Is this a live person?’.
By incorporating technical features such as AI algorithms that analyse imagery, sound, lighting, and movement at the point of capture, liveness checks can counter fake biometrics presented to deceive or bypass the authentication process.
Only then can the process continue and the biometrics be matched – by mapping and measuring the features of an existing user, such as the distance between their eyes or the length of their jawline, and comparing them to a biometric template to verify their identity.
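As a minimal sketch of the matching step (not Sybrin’s implementation), many systems reduce the mapped facial measurements to a fixed-length embedding vector and compare it to the enrolled template with a similarity threshold; the embeddings and the 0.6 threshold below are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, template, threshold=0.6):
    """Accept the identity claim only if the probe embedding is close
    enough to the enrolled template.  The 0.6 threshold is illustrative;
    real systems tune it to a target false-accept/false-reject trade-off."""
    return cosine_similarity(probe, template) >= threshold
```

In practice the threshold is calibrated on evaluation data, since it directly trades false accepts against false rejects.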
There are several methods of detecting liveness, but in general these can be classified as either an active or a passive approach.
Active liveness detection requires users to participate in the liveness check by responding to ‘challenges’.
Examples of this participation include nodding or turning one’s head from side to side, blinking, following a dot on a screen, smiling, speaking a series of words or numbers, leaning into the camera, or recording a short video.
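The active challenge-response idea can be sketched as follows, assuming a hypothetical detector that reports which gestures were actually performed; the challenge names are invented for illustration, not taken from any real SDK:

```python
import random

# Hypothetical challenge names; a real SDK defines its own set.
CHALLENGES = ["blink", "turn_head_left", "turn_head_right", "smile", "nod"]

def issue_challenges(n=3, seed=None):
    """Pick a random, unpredictable sequence of challenges so that a
    pre-recorded video of the victim is unlikely to match the prompts."""
    return random.Random(seed).sample(CHALLENGES, n)

def passes_active_check(issued, detected):
    """Pass only if every challenge was performed, in the issued order."""
    return issued == detected
```

The unpredictability of the challenge sequence is the point: a replayed recording cannot anticipate which gestures will be requested.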
Passive liveness detection requires minimal interaction by the user.
The liveness detection occurs when the user takes a selfie.
Various techniques are possible for passive liveness, ranging from analysing a single selfie image, to capturing a short video, to projecting different lighting patterns onto the subject.
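As a toy illustration of single-image passive analysis – far simpler than any production presentation attack detection algorithm – one family of techniques examines fine texture detail, which flat printouts and screen replays tend to lack; the Laplacian-variance measure and the threshold below are assumptions made for this sketch only:

```python
def laplacian_variance(gray):
    """Variance of a discrete Laplacian over a grayscale image (a list of
    rows of floats in [0, 1]): a crude proxy for fine texture detail."""
    h, w = len(gray), len(gray[0])
    responses = [
        4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
        - gray[y][x - 1] - gray[y][x + 1]
        for y in range(1, h - 1)
        for x in range(1, w - 1)
    ]
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def looks_live(gray, threshold=0.01):
    """Toy decision: flag captures with too little texture as suspect.
    The threshold is a placeholder, not a calibrated value."""
    return laplacian_variance(gray) >= threshold
```

Real passive systems combine many such signals – texture, reflectance, depth cues, motion – inside trained models rather than relying on one hand-set threshold.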
Which approach is best?
Lowering customer effort is a top priority for most businesses.
Therefore, passive liveness detection is the preferable method.
Active liveness solutions may cause unnecessary friction, and many ID&V operators see increasing abandonment rates, with some reporting as high as 50%, particularly in emerging markets.
The passive approach closes security gaps without adding friction back into the authentication process.
It requires no user education on the process, and it prevents a fraudster from gaining insight into how the system works – and therefore into where the most resistance lies and how to dupe it.
Sybrin’s Liveness Detection is available as active, passive, or a combined approach, depending on the use case or business requirements.
The SDK conforms to the ISO/IEC 30107-3 standard, having recently been successfully tested by a third party against Level A and Level B attacks from 10 Presentation Attack Instruments (PAIs), including paper masks, faces reconstructed on busts, videos, live persons, and more.
To find out more about Sybrin’s Liveness Detection and how it can improve your authentication process, visit the product page, or contact us to see what other automated solutions Sybrin can offer your business.