
WEF warns of deepfakes used by criminals to bypass face-ID authentication

  • Jan 13
  • 3 min read

The World Economic Forum (WEF) has raised concerns that the rapid development of deepfake technology is threatening trust in online identity systems. Deepfakes are highly convincing fake images or videos created using artificial intelligence. Criminals are now using these tools to impersonate real people and bypass identity checks, which poses serious risks to financial institutions, cryptocurrency platforms, and any organization that relies on digital trust.


Identity checks, often called “Know Your Customer” (KYC) processes, are designed to confirm who you are when you open an account or make transactions online. These checks typically involve scanning official documents such as passports or driving licences and comparing them with a live photo or video of the person. However, criminals are exploiting face-swapping technology and camera injection techniques to trick these systems. In practice, this means they can present fake or stolen identity documents alongside a manipulated live video feed that looks convincing enough to pass verification.
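At its core, the facial part of a KYC check often reduces to comparing a face embedding extracted from the document photo with one extracted from the live capture, and accepting the match if the two are close enough. The sketch below illustrates that idea with toy vectors; the embeddings, function names, and threshold are assumptions for illustration, not taken from any specific vendor's system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(doc_embedding, live_embedding, threshold=0.8):
    """Accept the face check only if the document photo and the live
    capture are close enough in embedding space."""
    return cosine_similarity(doc_embedding, live_embedding) >= threshold

# Toy vectors standing in for the output of a real face-recognition model.
doc = [0.9, 0.1, 0.3]
live_genuine = [0.88, 0.12, 0.31]   # near-identical face
live_impostor = [0.1, 0.9, 0.2]     # different face

print(faces_match(doc, live_genuine))   # True
print(faces_match(doc, live_impostor))  # False
```

A deepfake attack succeeds precisely by making the manipulated live feed land close to the document photo in this embedding space, which is why the checks described below look for signals beyond the match score itself.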


Researchers working on the WEF’s Cybercrime Atlas report examined 17 face-swapping tools and eight camera injection tools to understand how effective they are at defeating KYC protections. While most of these tools were originally intended for entertainment or creative purposes, some have capabilities that allow attackers to bypass traditional identity checks. The greatest risk occurs when these tools can deliver real-time, high-quality face swaps directly into the verification process. Even mid-range tools, when combined with camera injection techniques, can fool certain biometric systems under the right conditions.

Although these attacks are becoming more sophisticated, they are not perfect. Many still show signs of manipulation, such as mismatched lighting or timing issues, which can be detected by advanced security systems. This gives organizations an opportunity to strengthen their defences by focusing on these weaknesses.
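One of the weaknesses mentioned above, timing issues, can be checked mechanically: an injected synthetic feed often stalls while frames are rendered and then bursts, producing inter-frame intervals that deviate sharply from a real camera's steady cadence. A minimal sketch of such a check follows; the jitter threshold and sample timestamps are illustrative assumptions, not figures from the WEF report.

```python
import statistics

def frame_interval_anomaly(timestamps_ms, max_jitter_ms=15.0):
    """Flag a video feed whose inter-frame intervals deviate too far
    from the median interval - a crude proxy for injection artifacts."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    median = statistics.median(intervals)
    worst = max(abs(i - median) for i in intervals)
    return worst > max_jitter_ms

# A real 30 fps camera: intervals hover around 33 ms.
real_feed = [0, 33, 66, 100, 133, 166]
# An injected feed stalls while frames are generated, then bursts.
injected_feed = [0, 33, 120, 125, 158, 191]

print(frame_interval_anomaly(real_feed))      # False
print(frame_interval_anomaly(injected_feed))  # True
```

Real detection systems combine many such weak signals (lighting consistency, compression artifacts, sensor noise patterns) rather than relying on any single one.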


The report warns that the problem is likely to grow. AI tools are becoming easier to access and cheaper to use, lowering the barrier for criminals. Financial services and cryptocurrency platforms will remain prime targets, but other sectors that depend on identity verification could also be affected. Face-swapping technology is improving rapidly, making fake videos look more realistic and harder to detect. At the same time, fragmented regulations across countries make it difficult to create consistent protections, although global cooperation could improve resilience in the future.


To address these risks, organizations need to adapt quickly. The UK’s National Cyber Security Centre (NCSC) advises businesses to strengthen identity checks by using multiple layers of verification rather than relying solely on facial recognition or document scans. Adding “liveness” tests, which confirm that the person is physically present and not using a static image or video, is essential. Companies should also invest in detection tools that can spot signs of deepfake manipulation and keep these systems updated as criminal techniques evolve. Staff training is equally important—employees should understand the risks and know how to respond to suspicious activity. Finally, organizations should share threat intelligence within their sector and plan for resilience, assuming that attackers will continue to improve their methods.
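The layered approach the NCSC recommends can be sketched as a gate where every independent check must pass, with a randomized liveness challenge that a pre-recorded or replayed clip cannot anticipate. The structure below is a simplified illustration under those assumptions; the challenge set and function names are invented for the sketch.

```python
import random

def liveness_challenge(respond):
    """Issue a random action challenge; a replayed video cannot
    anticipate which action will be requested. `respond` stands in
    for the live capture-and-classify step."""
    challenge = random.choice(["turn_left", "turn_right", "blink", "smile"])
    return respond(challenge) == challenge

def verify_identity(document_ok, face_match_ok, respond):
    """Layered KYC: every independent check must pass; no single
    signal (such as a face match alone) is sufficient."""
    return all([document_ok, face_match_ok, liveness_challenge(respond)])

# A cooperative live user performs whatever action is requested.
live_user = lambda challenge: challenge
# A pre-recorded clip can only ever replay one fixed action.
replayed_clip = lambda challenge: "blink"

print(verify_identity(True, True, live_user))   # True
print(verify_identity(False, True, live_user))  # False: document failed
```

The design point is that the layers fail independently: a deepfake that defeats the face match still has to satisfy an unpredictable real-time challenge, and a forged document still fails the document layer.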


The WEF report concludes that defences must evolve as quickly as the technology itself. Detection systems should not only recognize known patterns but also anticipate future ones through continuous learning and collaboration. As criminals increasingly use open-source AI models and low-cost hardware, the ability to carry out real-time identity spoofing will become even easier, making agile and adaptive security measures critical.
