Protecting Biometric Identity Verification from Deepfakes and Spoofing Attacks
Biometric identity verification – using unique traits like facial features, fingerprints, or voice – has become a cornerstone of secure digital onboarding and authentication. However, recent advances in AI and readily available tools have enabled sophisticated spoofing attacks that attempt to fool biometric systems. In 2024, for example, deepfake techniques (AI-generated synthetic images or video) grew into a major global threat, accounting for 40% of all biometric fraud cases. Fraudsters now leverage everything from realistic silicone masks to injected video feeds in order to impersonate others and bypass identity checks. This article explores the evolution of these threats – including deepfakes, camera injection, masks, and screen replay attacks – and examines how modern biometric verification software can counter them. We will also look at real-world incidents that highlight the risks, and describe proven techniques (such as liveness detection) that help ensure the person on the other end of a camera is actually who they claim to be.
Common Spoofing Techniques and Threats
Biometric systems historically had to guard against simple “presentation attacks” (e.g. someone holding up a photo). Today, attackers have become far more inventive. Key methods used to deceive identity verification platforms include:
Deepfake Videos and AI-Synthesized Faces: Attackers can use AI (deepfake technology) to generate hyper-realistic video or images of a target person, effectively creating a digital mask. In a deepfake presentation attack, a model can dynamically manipulate a face – altering expressions or even simulating blinking – in real time. High-quality deepfake videos can be so convincing that even humans struggle to tell the difference. This method is resource-intensive but increasingly accessible, raising alarm among security experts. Entrust’s 2025 Identity Fraud Report noted that deepfake usage in identity attacks has surged, with such AI-driven spoofs now occurring every few minutes worldwide. Deepfakes have been used to fraudulently open bank accounts, take over online profiles, and even to impersonate job candidates in remote interviews, all by tricking facial recognition systems with an AI-generated face.
Virtual Camera Injection Attacks: Rather than presenting something to a camera, advanced attackers directly hijack the video feed that a biometric system receives. In a camera injection attack, malicious software intercepts the live camera request and feeds the system a fake video or image stream. For instance, an attacker might route a pre-recorded selfie video or an AI-altered video into the camera input, while the identity verification software believes it is seeing a live feed. Because this attack exploits the trusted camera channel, it can be especially hard to detect without specialized defenses. Such injection attacks have been trending upward: security analysts observed a 9× increase in biometric injection incidents in 2024 compared to the previous year, as fraudsters share tools to perform “virtual webcam” spoofing. Without countermeasures, camera injection allows attackers to bypass even sophisticated facial recognition and liveness checks by feeding perfectly timed fake imagery into the system.
Screen Replay (Photo/Video Display) Attacks: This is a modern twist on old presentation attacks, where the perpetrator uses a device screen to display a victim’s photo or video and presents it as “live” to the biometric scanner. In a simple photo replay, a high-resolution image of the legitimate user’s face is shown on a phone or tablet and held up to the camera. More boldly, in a video replay attack, a pre-recorded video of the real user (with natural movements like blinking) is played back on a screen to impersonate a live presence. These replay attacks are surprisingly common and effective against systems without robust liveness detection – one industry report calls video replay “one of the most widely used” methods to fool facial verification. Because the video shows a real person’s motion, basic face recognition may accept it if it cannot distinguish a screen-played video from a live camera feed. This category also includes pointing a webcam at another screen (or even an ID photo on a monitor) – essentially capturing a recording of a recording, which can trick naive systems.
Masks and Physical Spoofing: Not all attacks are digital; some fraudsters resort to literal masks and 3D replicas to impersonate someone else. Hyper-realistic silicone masks can imitate a person’s face with startling detail – including skin texture and hair – and have been used to fool both human inspectors and face recognition cameras. In fact, silicone masks have provided a viable route to identity fraud for years, and over the last decade more than 40 known criminal cases involved perpetrators using such disguises. Attackers have obtained custom-made masks (which can cost $400–$3,500) based only on a few photos of the target. There are documented incidents of people wearing latex or silicone masks to bypass border controls, banking KYC checks, and other ID processes. For example, in 2010 a young man boarded a flight from Hong Kong to Canada disguised as an elderly Caucasian man with a silicone mask – an audacious stunt that was only discovered mid-flight. In another case, a white bank robber in Ohio wore a life-like dark-skinned mask; police initially arrested an innocent Black man who resembled the mask before the truth was uncovered. Modern biometric systems also face 3D mask attacks (using molded masks or even 3D-printed heads) which provide depth and realism. Attackers can now 3D-print facial models with realistic paint, or use partial masks and makeup to fool face scanners. Because masks present the actual 3D shape of a face (albeit a fake one), they can defeat systems that check for depth via stereoscopic cameras. The use of such disguises is a serious concern – Europol and other agencies have formally warned that even advanced biometric checks can be bypassed by criminals using masks, prosthetics, or deepfake images.
Why These Attacks Matter: All the above methods aim to exploit the verification process by presenting false biometric evidence – whether it’s a copied image, a pre-recorded video, an AI-generated face, or a physical mask – in place of a live, authentic user. The end goal is typically to commit fraud: create accounts under a stolen identity, pass a KYC (Know Your Customer) check under false pretenses, or gain unauthorized access to someone else’s account. As biometric security becomes more prevalent globally, fraudsters have correspondingly adapted their techniques to “beat the system.” Traditional one-factor authentication (like passwords) can be reset if compromised, but biometrics are immutable – if someone’s face or fingerprint data is stolen or spoofed, it’s not something they can change. This makes preventing these attacks absolutely critical for organizations that rely on automated identity proofing.
Real-World Incidents and Forensic Cases
Spoofing attacks are not just theoretical possibilities – many real incidents illustrate the scope of the threat. Some notable cases from around the world include:
High-Stakes Mask Impersonations: Criminals have repeatedly used ultra-realistic masks to impersonate others. The 2010 airplane stowaway case mentioned above is one example, but there are many more. In France in 2019, con artists Gilbert Chikli and Anthony Lasarevitsch donned a silicone mask of a French government minister’s face and, via video calls, impersonated him to wealthy targets, scamming at least €55 million in a fake hostage rescue fund scheme. In Brazil, a convicted drug trafficker attempted a prison escape by wearing a mask and wig to pose as his teenage daughter during a visit. And in the United States, bank robbers have worn “Hollywood quality” masks to change race or age, successfully fooling witnesses and even leading to wrongful arrests of look-alike individuals. A 2024 scientific study on hyper-realistic face masks noted that such disguises have been used in over 40 criminal acts in the past decade, ranging from heists to fugitives evading capture. These incidents underscore that masks can defeat not only human perception but also undermine face-recognition-based security if no countermeasures are in place.
Deepfake and Digital ID Fraud: Thus far, documented frauds using deepfaked videos have been relatively few, but they are growing rapidly. One early red flag came in mid-2022 when the FBI issued a public warning that deepfakes and stolen IDs were being used to apply for remote tech jobs – with applicants superimposing someone else’s face (or a wholly synthetic face) over their own during video interviews. Around the same time, a European bank’s identity verification system was reportedly tricked by a deepfake video in an experiment, prompting regulators to demand stronger safeguards. By 2024, deepfakes had become such a concern that Entrust and Onfido (major ID verification providers) reported AI-based identity attacks occurring “every five minutes” on average worldwide. These deepfake attempts targeted things like selfie verification processes and liveness checks. Entrust’s data also showed that generative AI tools were fueling a 244% spike in forged identity documents, and that deepfakes made up nearly half of all biometric fraud attempts. In short, while a few years ago deepfake fraud was mostly theoretical, we are now seeing a wave of real cases where criminals use AI face-swaps or synthetic faces to try to fool onboarding systems – for instance, to create bank accounts under fabricated identities for money laundering.
Camera Feed Injections in the Wild: The concept of camera injection moved from hacker forums into actual fraud operations recently. Cybersecurity firms have identified malware (like mobile banking trojans) that can perform virtual camera injections – effectively feeding a stolen photo or deepfake video into a banking app’s selfie check. One such fraud operation, exposed in 2024, used an iPhone trojan dubbed “GoldPickaxe” to harvest victims’ face scans and defeat selfie verification with synthetic face videos. In 2025, digital identity vendor iProov reported that injection attacks on video-based ID verification exploded by 9× in 2024 compared to 2023, and virtual camera exploits jumped 28× in that time. These findings indicate that organised fraudsters are actively exploiting software vulnerabilities to skip the “live camera” requirement. Discussions on underground forums and even open communities have shared tools and guides for such exploits. For example, some users probing the verification process of a popular AI service openly recommended camera injection tricks as a workaround. The implication is clear: if an identity verification platform does not defend against feed injection, attackers will capitalize on that gap. Organizations have already lost money to accounts opened with forged credentials and injected selfies, though specific cases often remain confidential.
Testing Security – Researchers Expose Weaknesses: Several white-hat demonstrations have shown how easily some systems can be fooled, prompting industry improvements. In Germany, the Chaos Computer Club (CCC) – a renowned hacker collective – published a report in 2022 detailing how they bypassed six different video-based identification services used in banking. The CCC testers used simple techniques like holding up high-quality printouts of ID documents and using a video screen to simulate a live person (even placing fake hands on the ID for realism). All six providers were tricked, despite supposedly having liveness checks in place. This caused an uproar in Germany’s financial sector, because regulators (BaFin) had permitted video chat or recorded-video identification for remote customer onboarding. The incident accelerated moves toward more secure methods (like reading the RFID chip in ID cards) in that country. Similarly, academic researchers have been working on detecting face morphs in passport photos and spotting deepfake artefacts in videos, racing to stay ahead of the fraud techniques. Each time a vulnerability is exposed – whether by criminals or by ethical hackers – it serves as a lesson for the identity verification industry to tighten controls.
These cases collectively highlight that biometric verification is under siege from multiple angles. The threat is global in scope: financial institutions in Europe, tech companies in the US, telecom providers in Turkey – all have seen attempts at biometric spoofing. Law enforcement and cybersecurity agencies are paying close attention. Europol’s 2025 cybercrime report explicitly warned that “biometric systems such as fingerprints and facial recognition are being bypassed by criminals” through new methods. In response, providers of identity verification solutions are evolving their defences, as the next section will describe.
How Biometric Verification Stops Deepfakes and Spoofs
To maintain trust, modern identity verification software employs a multi-layered defence strategy against deepfakes, masks, and other spoofing attacks. A key concept in this domain is Presentation Attack Detection (PAD) – a set of techniques to distinguish between a live genuine user and a fake representation. Below, we outline the leading approaches and technologies that make biometric verification far more resilient today than in years past:
1. Liveness Detection (Active and Passive): Liveness detection is the cornerstone of PAD. Its goal is to verify that a biometric sample (like a face image) is being captured from a real, live person and not a recording or statue. There are two main types:
Active Liveness Detection: The system prompts the user to perform certain actions during capture – for example, turn their head to the left, blink twice, or read a random phrase. The user’s response (or lack thereof) helps confirm there’s a live human. A static photo or video replay will generally fail to comply with unpredictable prompts. Active methods can even involve interactive challenges like asking the user to follow a moving dot on the screen with their eyes. Because it forces real-time interaction, active liveness is highly reliable: a pre-recorded video can’t magically follow instructions it has never heard. However, it can be seen as less convenient or somewhat intrusive for the user, and clever deepfake puppetry might eventually mimic simple actions.
Passive Liveness Detection: No user action is required beyond looking at the camera; instead, the software analyses the face data itself for signs of vitality or, conversely, signs of fraud. Passive liveness algorithms look at cues like the texture of skin, lighting and reflection, 3D depth, micro-expressions, and more. For instance, real skin has sub-surface scattering of light and a certain warmth – a flat printed photo or a screen display does not. Tiny involuntary facial movements (like subtle eye muscle contractions) can indicate liveness. Passive methods also detect artefacts: if an image has looping video noise, a fixed blink rate, or signs of digital blending (as many deepfakes do), the system flags it. Modern passive liveness uses AI models trained on huge datasets of real vs. fake samples. This has proven quite effective: in independent lab evaluations, top solutions have achieved near 0% error rates in detecting spoof attempts. The advantage of passive liveness is a smoother user experience (no explicit tasks for the user) while still catching fakes. Many providers actually combine both approaches – e.g. an easy challenge plus behind-the-scenes analysis – to cover all bases. A toy illustration of the texture cues described here follows just below.
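To make the passive approach concrete, here is a minimal sketch of two classic replay cues: loss of fine detail in recaptured prints, and moiré interference from photographed screens. Everything here is illustrative; the thresholds, the input filename, and the decision rule are invented for the example, and production PAD engines use deep models trained on large real-vs-spoof datasets rather than hand-tuned heuristics like these.

```python
# Illustrative passive-liveness cues (NOT a production PAD system).
import cv2
import numpy as np

def replay_attack_cues(face_bgr: np.ndarray) -> dict:
    """Two classic texture cues used against screen/print replay attacks."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)

    # Cue 1: sharpness. Recaptured prints and screens lose fine facial
    # detail, which shows up as low variance of the Laplacian.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Cue 2: moire. Photographing an LCD creates periodic interference,
    # visible as strong isolated peaks in the high-frequency spectrum.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    spectrum /= spectrum.max()
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    spectrum[cy - 20:cy + 20, cx - 20:cx + 20] = 0  # mask low frequencies
    moire_peaks = int((spectrum > 0.25).sum())

    return {"sharpness": sharpness, "moire_peaks": moire_peaks}

if __name__ == "__main__":
    frame = cv2.imread("selfie.jpg")  # hypothetical input frame
    cues = replay_attack_cues(frame)
    # Invented thresholds, for demonstration only.
    suspicious = cues["sharpness"] < 50 or cues["moire_peaks"] > 40
    print(cues, "suspicious replay" if suspicious else "no replay cues")
```

In practice, dozens of such cues plus learned features are fused into a single liveness score, which is how certified passive systems reach the near-zero error rates cited above.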
2. Multi-Factor and Multi-Modal Verification: A robust system doesn’t rely on just one input. Cross-checking multiple identity factors greatly increases security. For remote identity proofing, this often means verifying both a photo ID document and a live selfie, and comparing the two. Advanced solutions first authenticate the ID document (checking security features, holograms, MRZ code, etc.) and extract the official photo. Then they match the live face to the ID photo to ensure the person is the document owner. During this process, document liveness detection can be employed – e.g. prompting the user to tilt their ID card on camera to see holographic changes, or using video to confirm the document is physical and not just a scanned image. If the ID has a biometric chip (as modern passports and national IDs do), the system can read the chip via NFC on a smartphone and pull the high-quality image stored inside. This chip reading method offers “100% certainty” of the document’s authenticity and provides a trustworthy reference photo for face matching. By combining document verification with facial biometric verification, the platform makes it far more difficult for an impostor to succeed – they would need both a convincing fake face and a perfectly forged document simultaneously. Similarly, some systems use multi-modal biometrics: for instance, requiring a voice sample along with a face video, or a fingerprint in addition to the selfie. A deepfake might fool a camera, but would it also mimic the victim’s voice in real time? Requiring two different biometric modalities exponentially raises the attacker’s burden. Many banks also pair biometrics with device-based factors (like a one-time SMS code to the phone) so that even if one layer is tricked, another stands in the way.
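As a sketch of how these layers combine, the fragment below models a decision in which every factor (document authenticity, face match, liveness, and a possession factor) must pass independently. All type names, scores, and thresholds are hypothetical; real platforms expose their own SDK APIs and calibrate thresholds on production data.

```python
# Hypothetical multi-factor onboarding decision (names/thresholds invented).
from dataclasses import dataclass

@dataclass
class VerificationEvidence:
    document_authentic: bool  # security features / NFC chip check passed
    face_match_score: float   # selfie-vs-ID-photo similarity, 0..1
    liveness_score: float     # passive liveness confidence, 0..1
    otp_confirmed: bool       # possession factor (e.g. SMS code) confirmed

def onboarding_decision(ev: VerificationEvidence,
                        match_threshold: float = 0.80,
                        liveness_threshold: float = 0.90) -> str:
    # Each layer must pass on its own: a deepfake that beats face matching
    # still has to beat liveness, and both still need a genuine document.
    if not ev.document_authentic:
        return "reject: document failed authentication"
    if ev.face_match_score < match_threshold:
        return "reject: selfie does not match ID photo"
    if ev.liveness_score < liveness_threshold:
        return "review: possible presentation or injection attack"
    if not ev.otp_confirmed:
        return "reject: possession factor missing"
    return "approve"

print(onboarding_decision(VerificationEvidence(True, 0.93, 0.97, True)))
```

The design point is conjunction: the attacker must defeat all layers in the same session, which is far harder than defeating any one of them.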
3. Anti-Deepfake and Injection Countermeasures: Recognizing the growing menace of AI-generated forgeries, vendors have developed specialised detectors for deepfakes and feed injection. These tools use forensic analysis of images and video streams to pick up subtle inconsistencies. For example, some deepfake detection algorithms focus on eye reflections or pulse (heartbeats can sometimes be detected in a video of a real face as slight color changes). Others look for pixel-level artifacts or odd blur patterns that generative models leave behind. A new line of defence is emerging specifically against camera injection: software can monitor the device environment and camera data flow for telltale signs of tampering. One product introduced in 2025 detects “unique digital signatures” of virtual cameras and deepfake feeds – essentially fingerprints in the video stream’s data that indicate it’s not coming straight from a genuine camera sensor. By checking properties like metadata, frame timing, and device API calls, it can often flag an injected feed before the fraudulent content even reaches face recognition algorithms. Additionally, systems are now integrating device integrity checks, confirming that the user’s device isn’t jailbroken or running known camera-hijack processes. If anomalies are found (e.g. an Android phone with root access active during verification), the system might prompt extra verification or abort the process. This “camera forensics” approach is crucial because injection attacks bypass traditional liveness at the sensor level, so the defence has to operate at the software and metadata level.
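A tiny illustration of one such environment check: on Linux, capture devices advertise their driver names under /sys/class/video4linux, so even a naive defence can flag well-known virtual camera drivers. This sketch is deliberately simplistic; the blocklist is illustrative and easily evaded, and commercial injection detection inspects far deeper signals (driver signatures, frame timing, OS integrity attestation).

```python
# Toy virtual-camera check for Linux. Real injection defences go much
# deeper; this only flags self-identified virtual camera drivers.
from pathlib import Path

KNOWN_VIRTUAL = ("obs virtual camera", "v4l2loopback", "manycam", "droidcam")

def suspicious_cameras() -> list[str]:
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("video*/name"):
        device_name = name_file.read_text().strip()
        if any(tag in device_name.lower() for tag in KNOWN_VIRTUAL):
            flagged.append(device_name)
    return flagged

if __name__ == "__main__":
    hits = suspicious_cameras()
    if hits:
        print("possible feed injection; virtual devices found:", hits)
    else:
        print("no known virtual camera drivers detected")
```

Because a determined attacker can rename a virtual device, a check like this is only one signal to be combined with the device integrity and stream forensics described above.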
4. Mask and 3D Spoof Detection: To combat masks and other physical replicas, systems have turned to both improved algorithms and hardware aids. On the software side, face liveness AI can detect texture anomalies – for instance, skin in a silicone mask might lack natural micro-pore detail or subtle facial movements. As one study found, even the best masks tend to have limited facial expressiveness and symmetrical features that differ from real faces. Algorithms now analyze regions like eyes, mouth, and nose for those signs (e.g., is there normal eye blinking and lip movement? Does the skin around the eyes crinkle naturally when blinking or smiling?). Some vendors also employ thermal imaging or IR depth sensors to catch masks: a live human face emits heat and has a certain 3D shape with variable depth, whereas a mask may have a uniform temperature and often a rigid structure. Apple’s Face ID, for example, uses an IR dot projector and camera to build a depth map of the face – a flat photo or 2D mask won’t match the expected depth profile. Likewise, liveness systems can shine invisible infrared light and look for the reflective properties of real skin versus synthetic material. Another approach is to incorporate user behaviour analysis: if someone is wearing a full-face mask, their head movements or the way light glints off their face might differ from normal. By combining these subtle indicators, good biometric systems today can detect most mask attacks. It’s an ongoing arms race – as masks improve, detectors must too – but the technology is meeting the challenge. In fact, in controlled testing, advanced passive liveness AI has successfully caught 100% of mask-based spoof attempts during certification trials.
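As a toy example of the depth cue, the snippet below measures the relief (spread of distances) across a face region from a hypothetical depth sensor: a photo or screen is nearly planar, while a live face has tens of millimetres of relief. A rigid 3D mask would pass this particular cue, which is precisely why depth is combined with the texture and IR cues above; the threshold and synthetic data are invented for illustration.

```python
# Depth-flatness cue for print/screen spoofs (synthetic demo data).
import numpy as np

def face_depth_relief(depth_mm: np.ndarray) -> float:
    """Robust spread of depth values (mm) across the face region."""
    valid = depth_mm[depth_mm > 0]  # drop missing/zero sensor readings
    p5, p95 = np.percentile(valid, [5, 95])
    return float(p95 - p5)

# A flat "screen" at ~40 cm vs. a crude face-like bump toward the camera.
flat = np.full((100, 100), 400.0)
bump = 25 * np.exp(-((np.indices((100, 100)) - 50) ** 2).sum(0) / 800)
face_like = flat - bump

for label, depth in [("flat spoof", flat), ("live-like", face_like)]:
    relief = face_depth_relief(depth)
    print(label, f"relief={relief:.1f} mm ->", "FLAT" if relief < 10 else "3D")
```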
5. Compliance with Security Standards: The field of biometric PAD has matured to the point where international standards define how to evaluate a system’s effectiveness. Chief among these is ISO/IEC 30107-3, which sets strict performance criteria for liveness detection against various attack types. Independent labs conduct certification tests where they throw a barrage of spoofs (photos, videos, masks, etc.) at a biometric product and measure its Attack Presentation Classification Error Rate (APCER) and Bona fide Presentation Classification Error Rate (BPCER). Top-tier vendors now routinely pass these certifications, demonstrating extremely low false accept rates for spoofs. For example, iBeta – an accredited lab – might certify that a particular face verification SDK has an APCER of <1% for Level 1 attacks (basic printouts) and maybe a few percent for Level 2 (more sophisticated masks), which is considered good. When choosing an identity verification provider, businesses often look for those who have met ISO 30107-3 compliance, as it indicates the solution has been “rigorously tested by an authorized organization” against real-world attack simulations. Additionally, many vendors are part of ongoing industry alliances and shared databases to update each other on emerging attack patterns (for instance, deepfake video datasets to train new detectors). All this means that the best systems don’t stand still – they are continuously improving their fraud detection in light of new threats.
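The two headline metrics are simple to state: APCER is the fraction of attack presentations wrongly classified as bona fide, and BPCER is the fraction of bona fide presentations wrongly rejected. The example below computes both from labelled trial outcomes; the counts are made up, and note that ISO/IEC 30107-3 actually reports APCER per attack instrument species (typically taking the worst case), whereas this sketch pools all attacks for simplicity.

```python
# APCER/BPCER from labelled PAD trials (pooled over attack types).
def pad_error_rates(trials: list[tuple[bool, bool]]) -> tuple[float, float]:
    """trials: (is_attack, classified_as_bona_fide) per presentation."""
    attacks = [ok for is_attack, ok in trials if is_attack]
    bona_fide = [ok for is_attack, ok in trials if not is_attack]
    apcer = sum(attacks) / len(attacks)                       # attacks accepted
    bpcer = sum(not ok for ok in bona_fide) / len(bona_fide)  # genuines rejected
    return apcer, bpcer

# Made-up example: 200 attacks (2 slip through), 1000 genuine (15 rejected).
trials = ([(True, True)] * 2 + [(True, False)] * 198
          + [(False, False)] * 15 + [(False, True)] * 985)
apcer, bpcer = pad_error_rates(trials)
print(f"APCER={apcer:.1%}  BPCER={bpcer:.1%}")  # APCER=1.0%  BPCER=1.5%
```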
6. Regulatory and Policy Measures: Finally, it’s worth noting that technology alone isn’t a silver bullet – policy plays a role. Regulators in various regions have begun mandating stronger identity verification practices to counter these risks. In the EU, upcoming eIDAS regulations and Anti-Money Laundering directives emphasize the need for secure remote identification, implicitly pushing providers to include liveness and anti-fraud checks. Some European regulators have explicitly required certain high-assurance methods: Germany’s BaFin, for instance, now allows NFC-chip reading of IDs for customer onboarding, considering it more secure than pure video verification after the aforementioned spoofing incidents. In Turkey, the government and financial authorities (BRSA and MASAK) rolled out guidelines in 2021–2023 for remote customer identification which require real-time video interaction and robust verification of liveness and document authenticity. These rules also oblige institutions to use secure, encrypted channels and to retain evidence of the identification session. By enforcing such standards, authorities aim to ensure that any organization offering remote onboarding has defenses against deepfakes, replays, and other attacks. Global industry groups are similarly advocating best practices – for example, the Biometrics Institute publishes guidelines on anti-spoofing, and NIST in the US runs ongoing evaluations of face recognition and liveness algorithms (with public rankings encouraging competition to improve). In summary, the world’s regulators and experts recognize the threat landscape and are increasingly demanding that identity verification processes include multi-factor, AI-enhanced fraud prevention measures. This external pressure, combined with innovation by solution providers, is steadily raising the bar for fraudsters hoping to beat biometric systems.
Case Study: Nest2Move’s Sodec Biometric Verification Solution
As an example of how the industry is responding, consider Sodec Technologies, a company at the forefront of biometric identity verification, and a technology partner of Nest2Move. Sodec’s platform is designed with a “zero-error” philosophy in identity and document verification. Founded in 2014, Sodec set out to provide extremely secure solutions for data collection, processing, and verification across various industries. Today, their biometric verification suite is used in telecommunications, finance, insurance, and other sectors to safely onboard customers remotely. A look at Sodec’s approach illustrates many of the principles discussed above in action:
Top-Tier Accuracy: Sodec has achieved global recognition in facial recognition performance, ranking in the top 10 on NIST’s facial recognition tests with an error rate of less than 1 in a million. This implies their face matching algorithms are highly accurate, minimizing false matches. Such precision is crucial because it ensures that adding anti-spoofing layers (which can sometimes reject even real users if too strict) does not unduly inconvenience genuine customers. Sodec can be both secure and user-friendly by virtue of its low error rates.
Passive Liveness with AI: One of Sodec’s flagship features is its passive liveness detection for facial verification. This system checks if a face is real without asking the user to do anything special. Under the hood, it uses AI models to catch signs of fake photos, videos, or masks – exactly the kinds of threats we’ve described. Sodec reports that in independent international evaluations, their liveness detection operated with 0% error against all fraud attempts, meaning it successfully identified every spoof sample in the test sets. The company emphasizes that where older active liveness methods (like blink detection) could be deceived, their advanced passive approach provides a more reliable solution. This is a strong claim that indicates Sodec’s system likely examines many subtle features of the image (from lighting to texture) to determine authenticity.
Anti-Spoof for Documents and Morphing: Beyond faces, Sodec applies liveness concepts to document checks. Their document liveness detection can tell if an identity document is real and physically present, versus a photo or scan of a document. For instance, it might detect if someone is showing a color photocopy or a screen image of an ID rather than an original – an important aspect since a fraudster might otherwise use a victim’s ID scan along with a deepfake face. Additionally, Sodec has integrated face morphing detection technology. Face morphing is a tactic where two different people’s facial features are digitally blended into a single photo (often to fool passport issuance, so that two people share one passport). Sodec’s system analyses the ID photo to spot if it has been manipulated or morphed. By catching these issues, Sodec addresses some niche but serious fraud schemes that many general providers might miss.
Comprehensive KYC and Global Coverage: Sodec’s solution is tailored for compliance with KYC (Know Your Customer) and AML regulations in multiple jurisdictions. They have built support for identity documents from 200+ countries worldwide, reflecting truly global coverage. This is critical for international businesses and also speaks to Sodec’s data and AI breadth (they likely trained their document verification on a vast variety of passports, ID cards, driver’s licenses, etc.). The platform is also scalable – Sodec has announced plans for a cloud-based verification service that can be accessed easily by clients around the world. To achieve this reach, Nest2Move and Sodec have established partnerships in Central Europe, Central Asia, Africa, and beyond. In the EU and Turkey, where regulatory standards are stringent, Sodec/Nest2Move have positioned their solution as compliant with local requirements (for example, GDPR for biometric data handling, and eIDAS high-assurance levels for identity proofing).
AI-Powered User Experience: While security is paramount, Sodec also leverages AI to keep the user experience smooth. They employ AI virtual assistants in the verification process that guide users through steps in their own language, across 30+ languages. This reduces errors (users are less likely to make mistakes like misaligning their ID on camera) and helps legitimate users complete onboarding quickly, even as the AI quietly performs fraud checks in the background. According to Sodec, many verification steps are now fully automated with no human intervention needed, yet maintain high reliability. This indicates a mature system where the AI is trusted to make pass/fail decisions thanks to the confidence gained from their extensive testing and low false positive rates.
In summary, Nest2Move’s Sodec solution exemplifies the state-of-the-art in defending against deepfakes, spoofing and other verification attacks. It combines world-class facial recognition, rigorous liveness/PAD measures, document verification enhancements, and broad compliance to deliver a secure identity verification platform. Solutions like Sodec show that, even as fraudsters innovate, the industry is responding with equally innovative safeguards. The “arms race” between biometric security and attackers is ongoing, but with companies pushing toward “zero-error” performance and employing AI in clever ways, the balance can be tipped in favor of security.
Conclusion
Identity verification has entered a new era – one in which seeing is not necessarily believing. Deepfakes can conjure up realistic faces of people who aren’t present, and masks or digital avatars can let impostors pass as someone else. The good news is that biometric verification systems are rapidly evolving to meet these challenges. By incorporating advanced liveness detection, multi-factor checks, and AI-driven forensic analysis, the latest solutions make it exceedingly difficult for would-be fraudsters to beat the system. Certainly, no security measure is 100% foolproof and the cat-and-mouse dynamic will continue. Yet, the combination of technology innovation and regulatory pressure is raising the cost and complexity of mounting a successful spoofing attack. A fraudster now has to contend with being unmasked by texture analysis, flagged by device checks, or caught by subtle cues that they can’t easily control.
For organizations, the lesson is clear: robust biometric verification is possible and indeed essential in the face of modern threats. Relying on simple face matching without liveness, or outdated methods without AI enhancement, is inviting trouble, as several high-profile breaches and fraud cases have shown. On the other hand, deploying a comprehensive identity verification platform with proven anti-spoofing capabilities can virtually eliminate certain fraud vectors (such as basic photo or video replays) and dramatically reduce the success of advanced attacks (deepfakes, injections, masks). This allows businesses to confidently onboard customers remotely and comply with KYC/AML rules, without becoming an easy target for identity thieves.
In a world where “fake identities” can be manufactured with a few clicks, establishing real identity requires a multi-pronged strategy. Biometric verification, when done right, remains a powerful tool – arguably more secure than traditional methods – because of these evolving defenses. As we’ve explored, the industry is continually learning from incidents and investing in AI to stay ahead. By understanding the threat landscape and implementing the right safeguards, we can ensure that the only people getting through our identity checks are the genuine, live individuals they claim to be. Trust in digital identity, after all, hinges on our ability to keep proving the prover is real.
Sources
Ali Haydar Ünsal, “Sodec: Zero-Error Principle in Identity and Biometric Verification!” (Interview with Sodec’s co-founder), Fintechtime (English translation on LinkedIn, Nov. 14, 2024) – Discusses Sodec’s technology, including passive liveness detection and global verification platform plans.
Socure Blog – “How Injection Attacks Are Evolving: Why Fraud Fighters Need to Stay a Step Ahead” (May 27, 2025) – Explains camera injection attacks and the shift from deepfakes to simpler image manipulations, highlighting how fraudsters intercept live feeds with virtual cameras.
Abigail Opiah, “Deepfake attacks now occur every five minutes, Entrust report warns”, Biometric Update (Nov 19, 2024) – Reports findings from Entrust’s 2025 Identity Fraud Report (with Onfido), noting deepfakes accounted for 40% of biometric fraud and digital ID attacks surged with generative AI.
David J. Robertson et al., “The super-recogniser advantage extends to the detection of hyper-realistic face masks” (Applied Cognitive Psychology, 2024) – Academic study stating 40+ known crimes in the last decade involved silicone masks, emphasizing the threat of mask disguises in identity fraud.
Darren Guccione, “Silicon masks and biometric authentication – a threat to security?”, CybersecAsia (Aug 6, 2024) – Describes incidents of thieves using hyper-realistic masks (e.g. in China) and cites an IEEE study that facial recognition is far less effective against custom mask attacks.
Wikipedia: “List of crimes involving a silicone mask” – Compilation of notable criminal cases using masks. Examples include a 2010 impostor on an Air Canada flight disguised as an old man, a bank robber who used a mask to appear as another race, and a 2019 scam with culprits wearing a French minister’s mask via Skype.
Regula Forensics Blog – “Presentation Attacks: What Liveness Detection Systems Protect From Every Day” (2023) – Overview of presentation attack types and PAD methods. Defines replay attacks (“fraudsters simply play a pre-recorded video on a screen”) and deepfake attacks (AI-generated videos that can mimic expressions, “sometimes hard even for humans to tell”). Also explains active vs. passive liveness and the importance of ISO 30107-3 compliance.
Chris Burt, “ROC introduces software to stop growing threat of biometric injection attacks”, Biometric Update (Apr 22, 2025) – Announces a new camera injection detection solution by ROC. Notes it detects deepfakes/virtual cameras via device integrity monitoring and cites iProov’s report of 9× increase in injection attacks in 2024 (and 28× for virtual camera exploits).
Independent.ie – “Hackers and criminals can bypass biometric systems on phones with artificial fingerprints and deepfake photos, Europol warns” (Apr 23, 2025) – Article by Adrian Weckler summarizing Europol’s warning that fingerprint and face recognition locks are being circumvented by criminals using tricks like fake fingerprints and deepfaked images.
ABC News, “White Man Used Lifelike Black Mask to Evade Arrest in Robberies” (Dec 1, 2010) – News report on the Conrad Zdzierak case in Ohio. Describes how a suspect wore a Hollywood-quality silicone mask of a Black man’s face, leading to a wrongful identification before he was caught (mask discovered with stolen money).
Mark Tran, “Refugee sheds disguise mid-flight”, The Guardian (Nov 5, 2010) – Coverage of the Air Canada masked stowaway. Includes a photo of the young Chinese asylum-seeker and the elderly silicone mask he used as a disguise. (Illustrates the realism of the mask and how it fooled airline staff.)
FBI Internet Crime Complaint Center (IC3) Alert, “Deepfakes and Stolen PII Utilized to Apply for Remote Work Positions” (June 28, 2022) – FBI PSA warning that criminals were using deepfake videos in combination with stolen personal data to impersonate legitimate applicants in remote job interviews. Highlights an emerging avenue of identity fraud beyond traditional financial crimes.