AI is radically transforming the way we live and work, but alongside these advancements, new technologies are emerging that pose fresh fraud challenges for consumers and financial institutions.
Among these emerging threats, one of the most significant is the rise of deepfake technology. Once confined to the realm of science fiction, deepfakes are AI-generated images, videos, and audio that depict highly convincing fabricated events. Their prevalence has soared in recent years, with the number of incidents in the US more than doubling from 2022 to the first quarter of 2023.
Despite the alarming increase in deepfakes, public awareness and understanding of the technology remain limited: a substantial 43% of individuals openly admit they cannot tell a deepfake video from a genuine one. This knowledge gap creates a worrisome vulnerability, leaving people susceptible to fraudulent schemes facilitated by deepfakes.
The risk deepfakes pose to the financial sector
The financial services sector faces significant risks due to the emergence of deepfake technology. This advanced AI-driven technique opens the door for various fraudulent activities, some of which have already resulted in substantial financial losses for companies.
One prominent threat involves fraudsters leveraging deepfakes to impersonate high-ranking individuals such as CEOs or bank employees. By convincingly mimicking the appearance and voice of these individuals, criminals can manipulate unsuspecting victims into divulging sensitive personal information, initiating unauthorised money transfers, or even assisting in money laundering schemes. This form of spoofing fraud becomes exceptionally persuasive and difficult to detect, amplifying the potential harm to both individuals and financial institutions.
Deepfake technology adds a new layer of sophistication to fraud
The consequences of falling victim to such deepfake-based scams can be severe, with companies losing millions of dollars and individuals suffering significant financial and reputational damage. The rise of deepfake technology adds a new layer of sophistication to fraudulent activities in the financial services industry, underscoring the urgency for robust security measures, enhanced authentication protocols, and heightened awareness among employees and customers alike.
In 2020, the use of deep voice technology was so convincing that it led a Hong Kong bank manager to transfer $35m to a ring of criminals, believing he was communicating with a company director he knew well.
This example shows the devastating impact that high-tech swindles facilitated by AI can have. The incident came to light at a time when concerns and warnings about the use of AI in cybercrime, particularly in the creation of deepfake images and voices, were on the rise.
It further highlights the alarming potential of deepfakes, where sophisticated technology is employed to fabricate realistic visuals and audio that deceive victims. These manipulated digital assets can be used for various fraudulent purposes, such as impersonating individuals, manipulating information, and orchestrating elaborate scams.
The different forms of deepfake attacks
As the prevalence of deepfake-based cybercrime grows, it underscores the urgent need for the financial sector to heighten its vigilance and adopt countermeasures to protect customers and employees from falling victim to such manipulative tactics.
Deepfake attacks can take many different forms, each capable of causing significant losses. In what is known as ‘ghost fraud’, criminals use the personal data of a deceased person to access savings accounts, apply for loans or credit, or even make fraudulent claims for pensions, life insurance and other benefits.
Deepfake technology can also be used for new account fraud, where fake or stolen identities are used to open bank accounts. Recent data suggests that this type of fraud has led to losses of $3.4bn, and it can be particularly challenging for banks, many of which have moved to onboarding customers remotely. Other challenges include synthetic identity fraud, where criminals combine elements of real and fake identities and attach them to an individual who doesn’t exist – as much as $6bn has already been lost to this activity.
From Deepfake to DeepFraud Prevention
The increasing prevalence of deepfake fraud poses a pressing challenge for the financial sector. To safeguard their customers effectively, it is evident that financial services providers, payment gateways, and retail banks must adopt robust solutions and measures.
Recognising the potential harm caused by deepfake technology, financial institutions must prioritise the implementation of advanced security protocols and authentication mechanisms. This includes employing cutting-edge AI-driven technologies to detect and prevent deepfake-based fraudulent activities. By leveraging AI-powered solutions, financial service providers can enhance their ability to identify and mitigate potential risks in real-time, thereby bolstering customer protection.
Financial transactions often require real-time processing, making it crucial that banks swiftly verify the identity of customers. However, current verification processes are time-consuming, often causing delays and potentially impacting customer experience. What’s more, many of the new techniques used by fraudsters can bypass traditional security and detection methods.
How AI can be used to fight back against deepfake fraud
To protect consumer payments, financial institutions must leverage advanced technology and implement efficient, scalable solutions to combat deepfake identity theft while maintaining operational efficiency. Next-generation, tech-first solutions that are customisable to their specific needs and able to adapt to evolving threats will be critical, helping banks protect customers and avoid unwittingly onboarding bad actors.
Although AI may have given rise to deepfake technology, it can also be used in the fight against deepfake fraud. Many organisations are deploying AI-driven solutions that detect image tampering and use facial biometrics to verify and authenticate customers.
These identity verification solutions can check a customer’s face against official documents, such as a driving licence or passport. Solutions that combine biometrics and document verification with database checks provide a multi-layered verification process that can quickly authenticate the customer before a transaction is made.
Effective solutions are those that can defeat attempts to bypass “liveness” checks, which are designed to ensure biometric inputs come from a genuine, present person. The best solutions enable financial institutions to do this seamlessly within seconds, while simultaneously verifying the person’s identity, document and address.
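For illustration only, the multi-layered approach described above can be sketched as a simple decision that combines the results of independent checks. The function names, fields and thresholds below are hypothetical, not any vendor’s actual API; the point is that a deepfake which fools one layer (such as face matching) can still be rejected by another (such as liveness):

```python
from dataclasses import dataclass

# Hypothetical scores returned by independent verification layers:
# face-document match, document authenticity, database checks, liveness.
@dataclass
class CheckResults:
    face_match_score: float    # 0.0-1.0 similarity between selfie and ID photo
    document_authentic: bool   # tampering / security-feature checks passed
    database_match: bool       # name and address found in trusted sources
    liveness_score: float      # 0.0-1.0 confidence the selfie is a live person

def verify_customer(r: CheckResults,
                    face_threshold: float = 0.85,
                    liveness_threshold: float = 0.90) -> bool:
    """Approve only when every layer passes: a single weak layer
    (e.g. a deepfaked selfie failing liveness) rejects the customer."""
    return (r.face_match_score >= face_threshold
            and r.document_authentic
            and r.database_match
            and r.liveness_score >= liveness_threshold)

# A convincing deepfake may pass face matching yet fail the liveness check:
deepfake_attempt = CheckResults(0.93, True, True, 0.40)
print(verify_customer(deepfake_attempt))  # False
```

The design choice here is that the layers are conjunctive: no single high score can compensate for a failed check, which is what makes the combination harder to defeat than any one biometric test on its own.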
Governments and regulators need to up their game
But while robust cybersecurity is vital, it’s important that governments and regulatory bodies play their part too. Although some jurisdictions have introduced guidelines for deepfake-related crimes, the legal frameworks are often insufficient or lack clarity on enforcement. We need robust regulations that address the challenges posed by deepfake identity fraud, so we can better protect both consumers and financial institutions. Additionally, fostering collaboration and information sharing among industry stakeholders is vital.
Financial institutions, technology providers, and regulatory bodies need to work together to develop comprehensive strategies and best practices to combat deepfake fraud effectively. This collaborative approach will enable the financial sector to stay ahead of evolving threats and provide customers with a safer digital environment.
Ultimately, the adoption of proactive and innovative solutions, along with collaborative efforts, will empower financial services providers to effectively protect their customers from the risks associated with deepfake fraud.
Colum Lyons is CEO of ID-Pal