A Hong Kong bank recently fell victim to an impersonation scam in which a bank employee was tricked into transferring $25.6 million to thieves after a video call with the bank's CFO and other colleagues. None of the participants were real people; all were deepfakes created with the help of artificial intelligence.
This incident illustrates how cybercriminals can use deepfakes to trick humans and commit fraud. It also raises concerns about the threats that deepfakes pose to biometric authentication systems.
The use of biometric markers to authenticate identities and access digital systems has exploded in the last decade and is expected to grow by more than 20% annually through 2030. Yet, as with every advance in cybersecurity, the bad guys are not far behind.
Anything that can be digitally sampled can be deepfaked: an image, video, audio, or even text that mimics the sender's style and syntax. Equipped with any one of a half-dozen widely available tools and a training dataset such as YouTube videos, even an amateur can produce convincing deepfakes.
Deepfake attacks on authentication come in two varieties, known as presentation and injection attacks.
Presentation attacks involve presenting a fake image, rendering, or video to a camera or sensor for authentication. Some examples include:
Print attacks:
- 2D image
- 2D paper mask with eyes cut out
- Photo displayed on a smartphone
- 3D layered mask
- Replay attack of a captured video of the legitimate user
Deepfake attacks:
- Face swapping
- Lip syncing
- Voice cloning
- Gesture/expression transfer
- Text-to-speech
Injection attacks involve manipulating the data stream or communication channel between the camera or scanner and the authentication system, similar to well-known man-in-the-middle (MITM) attacks.
Using automated software intended for application testing, a cybercriminal with access to an open device can inject a passing fingerprint or face ID into the authentication process, bypassing security measures and gaining unauthorized access to online services. Examples include:
- Uploading synthetic media
- Streaming media through a virtual device (e.g., cameras)
- Manipulating data between a web browser and server (i.e., man-in-the-middle)
Defending Against Deepfakes
Several countermeasures offer protection against these attacks, most centered on establishing whether the biometric marker comes from a real, live person.
Liveness testing techniques include analyzing facial movements or verifying 3D depth information to confirm a facial match, examining the movement and texture of the iris (optical), sensing electronic impulses (capacitive), and verifying a fingerprint below the skin surface (ultrasonic).
This approach is the first line of defense against most kinds of deepfakes, but it can affect the user experience when it requires the user's active participation. There are two types of liveness checks:
- Passive checks run in the background without requiring any input from the user. They add no friction but offer less protection.
- Active checks require users to perform an action in real time, such as smiling or speaking, to prove they are live. They offer more security at the cost of some friction in the user experience.
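An active liveness check of the kind described above can be sketched as a randomized challenge with a short response window, so that a pre-recorded replay cannot anticipate the requested action. The challenge list, timeout, and function names here are illustrative, not a real product API:

```python
import random
import time

# Hypothetical challenge set for an active liveness check.
CHALLENGES = ["smile", "blink twice", "turn head left", "read the phrase aloud"]

def issue_challenge():
    """Pick a randomized action so a replayed recording cannot anticipate it."""
    return random.choice(CHALLENGES), time.monotonic()

def verify_response(expected, performed, issued_at, max_seconds=5.0):
    """Accept only the requested action, completed within a short window."""
    on_time = (time.monotonic() - issued_at) <= max_seconds
    return performed == expected and on_time
```

Randomizing the challenge and bounding the response time are what defeat replay attacks: a captured video of the legitimate user is unlikely to contain the right action at the right moment.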
In response to these new threats, organizations must determine which assets warrant the higher level of security that active liveness testing provides and where it can be skipped. Many regulatory and compliance standards already require liveness detection, and many more may follow as incidents such as the Hong Kong bank fraud come to light.
Best Practices Against Deepfakes
A multi-layered approach is necessary to combat deepfakes effectively, incorporating both active and passive liveness checks. Active liveness requires the user to perform randomized expressions, while passive liveness operates without the user’s direct involvement, ensuring robust verification.
In addition, true-depth camera functionality is needed to prevent presentation attacks and protect against device manipulation used in injection attacks. Finally, organizations should consider the following best practices to safeguard against deepfakes:
- Anti-Spoofing Algorithms: Algorithms that detect and differentiate between genuine and spoofed biometric data can catch fakes and authenticate the identity. They analyze factors such as texture, temperature, color, movement, and data injections to determine the authenticity of a biometric marker. For example, Intel's FakeCatcher looks for subtle pixel-level changes that reflect blood flow in the face to determine whether a video is real or fake.
- Data Encryption: Ensure that biometric data is encrypted during transmission and storage to prevent unauthorized access. Strict access controls and encryption protocols can head off man-in-the-middle and protocol injections that could compromise the validity of an identity.
- Adaptive Authentication: Use additional signals to verify user identity based on factors such as networks, devices, applications, and context to appropriately present authentication or re-authentication methods based on the risk level of a request or transaction.
- Multi-Layered Defense: Relying on static or stream analysis of videos and photos to verify a user's identity leaves room for bad actors to circumvent current defenses. By augmenting high-risk transactions (e.g., cash wire transfers) with a verified, digitally signed credential, sensitive operations can be protected with a reusable digital identity. With this approach, video calls could be supplemented with a green checkmark stating, "This person has been independently verified."
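To make the anti-spoofing practice above concrete, here is a deliberately simple toy heuristic: flat printed photos and screen replays tend to show less pixel-level texture variation than live skin. Real anti-spoofing models learn far richer cues (moiré patterns, blood flow, 3D depth); the threshold and frame representation below are purely illustrative:

```python
from statistics import pvariance

def spoof_score(gray_frame):
    """Toy heuristic: score a grayscale frame (list of pixel rows) by its
    overall pixel variance; flat presentations score low."""
    pixels = [p for row in gray_frame for p in row]
    return pvariance(pixels)

def looks_live(gray_frame, threshold=100.0):
    # Threshold is illustrative; production systems calibrate on labeled data.
    return spoof_score(gray_frame) > threshold
```

A perfectly uniform frame (such as a blank print) scores zero, while a textured frame scores high; a production detector would combine many such signals rather than rely on one.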
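The data-encryption practice above is mostly about protecting the channel; one recoverable piece of that is tamper detection on the biometric payload in transit. This sketch uses an HMAC over the serialized payload as a stand-in for full TLS plus authenticated encryption; the payload fields and key handling are assumptions for illustration:

```python
import hashlib
import hmac
import json

def sign_payload(payload: dict, key: bytes) -> tuple[bytes, str]:
    """Serialize the biometric payload and attach a MAC so the server can
    detect in-transit tampering (a stand-in for TLS + authenticated encryption)."""
    raw = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, raw, hashlib.sha256).hexdigest()
    return raw, tag

def verify_payload(raw: bytes, tag: str, key: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, raw, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Any man-in-the-middle modification of the bytes, such as injecting a different face template, invalidates the tag and can be rejected before authentication proceeds.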
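The adaptive-authentication practice above can be sketched as a simple additive risk score over contextual signals, with the required authentication step escalating as risk grows. The signal names, weights, and thresholds here are hypothetical, not drawn from any particular product:

```python
# Hypothetical signal weights for a simple additive risk score.
RISK_WEIGHTS = {
    "new_device": 40,
    "unfamiliar_network": 25,
    "unusual_hour": 15,
    "high_value_transaction": 30,
}

def risk_score(signals: set[str]) -> int:
    """Sum the weights of whichever risk signals are present."""
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

def required_step(signals: set[str]) -> str:
    """Map risk to an authentication requirement: low risk passes silently,
    higher risk escalates to active liveness or a second factor."""
    score = risk_score(signals)
    if score < 30:
        return "passive"
    if score < 60:
        return "active_liveness"
    return "second_factor"
```

This is how adaptive systems keep friction low for routine logins while forcing stronger checks, such as active liveness, only when the request looks risky.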
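Finally, the signed-credential idea in the multi-layered-defense practice can be sketched as issuing a short-lived credential and verifying it before showing the green checkmark. A shared-secret HMAC stands in for a real asymmetric signature scheme (e.g., Ed25519, where verifiers hold no secret); the key, claim names, and TTL are illustrative:

```python
import hashlib
import hmac
import json
import time

# Stand-in for an issuer's signing key; real deployments would use
# asymmetric signatures so that verifiers never hold the signing secret.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(subject: str, ttl_seconds: int = 3600) -> dict:
    """Issue a signed, expiring credential attesting the subject was verified."""
    claims = {"sub": subject, "exp": time.time() + ttl_seconds}
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verified_checkmark(cred: dict) -> bool:
    """Show the green checkmark only if the signature and expiry both hold."""
    expected = hmac.new(ISSUER_KEY, cred["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False
    return json.loads(cred["body"])["exp"] > time.time()
```

A deepfaked participant on a video call would lack a valid credential, so the checkmark would simply fail to appear regardless of how convincing the video looks.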
Strengthening Identity Management Systems
It's important to remember that simply replacing passwords with biometric authentication is not a foolproof defense against identity attacks. Biometrics must be part of a comprehensive identity and access management strategy that addresses transactional risk, fraud prevention, and spoofing attacks.
To effectively counteract the sophisticated threats posed by deepfake technologies, organizations must enhance their identity and access management systems with the latest advancements in detection and encryption technologies. This proactive approach will not only reinforce the security of biometric systems but also advance the overall resilience of digital infrastructures against emerging cyberthreats.
Prioritizing these strategies will be essential in protecting against identity theft and ensuring the long-term reliability of biometric authentication.