While debates around Artificial Intelligence (AI) focus on its potential impact on jobs and when it will finally deliver on promised productivity gains, one crucial aspect is being overlooked: the widespread adoption of these powerful tools by hackers, particularly in the high-value B2B payments ecosystem.
The evidence continues to mount: 41% of fraud attacks are now reported to be AI-powered, potentially costing companies millions each year. One striking data point highlights why this is so critical: most businesses recoup less than 10% of stolen funds. At the same time, a lack of awareness and the sophisticated, AI-enhanced tactics used by cybercriminals mean that many companies underestimate the threat, leaving them exposed to attack.
And it is indeed a serious threat, to the general public and the B2B sector alike. The scale of fraud is difficult to quantify, as much goes undetected or unreported, but the evidence points to a sharp rise in cases and associated losses. The growth of online and digital transactions, in both B2C and B2B contexts, has coincided with this increase, amplifying exposure to fraudulent activity. For example, B2B financial crime is estimated to account for nearly a third of global fraud costs, totalling roughly $1.6 trillion annually, while the total global cost of fraud approaches $5 trillion per year.
Fuelling this rise are new trends in tech-augmented fraud. Bad actors can now create convincing fake invoices, purchase orders, or payment instructions that closely resemble legitimate business communications. They can also deploy deepfake audio and video of CEOs and other executives to trick buyers out of potentially millions of dollars.
In the US, there were more than 105,000 deepfake-related attacks last year, roughly one every five minutes, a massive increase from 2023. Targets have included major global companies such as Ferrari, cloud-security firm Wiz, and advertising group WPP. An alarming case involved UK-based engineering firm Arup, where an employee, after a video meeting with AI-generated impersonations of several company executives, transferred $25 million to fraudsters.
AI-powered fraud has grown so significantly that it has become a career option. As The Wall Street Journal noted, Russian-language advertisements have appeared seeking hackers who specialise in breaking into and extorting businesses. One posting requested “a native English speaker” to assist with business correspondence and calls.
Pandora’s Box is open – how should we respond?
Traditional fraud techniques, including routine back-office exploits that once required manual effort and specialist expertise, are now trivial to scale with AI. Activities such as hacking and phishing can be automated and effectively outsourced to systems that continuously probe for vulnerabilities until they find the weakest link. There is a striking irony here: the digitalisation of B2B transactions has brought enormous benefits in transparency and convenience, yet that same convenience has opened doors to cybercriminals, putting not only our own systems but also those of our customers and partners at risk.
But Pandora’s Box has been opened, and we need to react thoughtfully, avoiding panicked reactions as well as the temptation to ignore the problem. The good news is that the Global 2000 is responding, and there are practical steps companies can take to get ahead of AI-driven fraud. The reality is that, powerful as they are, core enterprise resource planning (ERP) systems were architected long before Russian darknet recruiters were even a concept, so it is no surprise they often lack the advanced fraud-detection capabilities required to address these evolving threats.
The speed and scale of AI-enabled fraud mean that real-time monitoring and analysis are now essential elements of effective fraud prevention strategies. CFOs, CPOs, and their teams need ways to continuously analyse transaction data, identify anomalies, and flag suspicious activities for immediate review.
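To make that concrete, the sketch below shows one common pattern for such monitoring: training an unsupervised anomaly detector on a buyer’s historical transactions and scoring each new one as it arrives. It is an illustration only, not a description of any specific vendor’s system; the library choice (scikit-learn’s IsolationForest), the feature set, and the contamination setting are all assumptions.

```python
# Minimal sketch of real-time transaction anomaly flagging.
# Feature names and the contamination setting are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions: [amount_usd, hour_of_day, days_since_last_order]
history = np.array([
    [1200.0, 10, 7],
    [980.0, 11, 14],
    [1500.0, 9, 7],
    [1100.0, 14, 10],
    [1300.0, 10, 7],
])

# Fit an unsupervised detector on what "normal" looks like for this buyer.
model = IsolationForest(contamination=0.05, random_state=42).fit(history)

def review_transaction(features):
    """Score an incoming transaction; route outliers to manual review."""
    score = model.decision_function([features])[0]  # lower = more anomalous
    if model.predict([features])[0] == -1:
        return f"FLAG for review (score={score:.3f})"
    return f"auto-approve (score={score:.3f})"

# A $90,000 payment instruction at 3 a.m. stands out against this history.
print(review_transaction([90000.0, 3, 1]))
print(review_transaction([1250.0, 10, 7]))
```

In production this kind of scoring would run on far richer features and retrain continuously, but the shape of the workflow, score every transaction and route outliers to a human, is the same.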
AI brings fraud headaches, and a way to ease them
It’s important to recognise that the tools used to fight yesterday’s threats, like viruses and malware, are less effective here. Business identity theft is about impersonation, not taking systems offline through distributed denial-of-service (DDoS) attacks. Retailers and e-commerce merchants need a multifaceted approach, one focused on detecting suspicious activity as soon as it appears and intervening immediately.
However, any techniques leaders implement must protect their organisations without introducing unnecessary friction for legitimate transactions. The good news is that companies can ‘set a thief to catch a thief’, putting the power and speed of AI at the centre of the countermeasures, a strategy that, fortunately, is already in play.
Many B2B payment leaders are now using AI to verify the authenticity of invoices, purchase orders, and payment instructions before they are processed. At TreviPay, we apply AI across a variety of use cases to detect bad actors as quickly as possible. AI-driven systems can analyse vast amounts of data, flagging anomalies that may indicate fraud before any damage is done.
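One simple, widely used check in this space is to compare the details on an incoming payment instruction against the vendor master record before anything is paid. The sketch below illustrates the idea; it is not TreviPay’s actual system, and the data model, field names, and rules are hypothetical.

```python
# Illustrative sketch (not any vendor's actual system): verify that a
# payment instruction matches the vendor master record before processing.
from dataclasses import dataclass

@dataclass
class VendorRecord:
    name: str
    iban: str
    email_domain: str

# Hypothetical vendor master data.
VENDOR_MASTER = {
    "acme-supplies": VendorRecord("Acme Supplies Ltd",
                                  "GB82WEST12345698765432",
                                  "acme-supplies.com"),
}

def verify_payment_instruction(vendor_id, payee_iban, sender_email):
    """Flag instructions whose bank details or sender domain have changed."""
    record = VENDOR_MASTER.get(vendor_id)
    if record is None:
        return ["unknown vendor"]
    issues = []
    if payee_iban != record.iban:
        issues.append("payee bank account differs from vendor master")
    if not sender_email.endswith("@" + record.email_domain):
        issues.append("sender domain does not match vendor on file")
    return issues  # empty list means no red flags

# A classic invoice-fraud pattern: familiar vendor name, new bank account
# and a look-alike sender domain.
print(verify_payment_instruction("acme-supplies",
                                 "GB82WEST00000000000001",
                                 "accounts@acme-supplies.co"))
```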
A composable approach, outsourcing specific functions to specialised strategic partners, can also strengthen security. Collaborating on real-time decision-making, credit risk assessment, and identity verification (e.g., card and address checks) helps prevent AI-enabled fraud. Adding identity controls, such as two-factor authentication, strict sign-on protocols, and continuous 24/7 monitoring, further blocks unauthorised access and enables rapid response to threats.
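As a small illustration of the two-factor authentication mentioned above, the sketch below verifies a time-based one-time password (TOTP) using the open-source pyotp library. The secret handling is deliberately simplified; real deployments generate the secret at enrolment and keep it in a secrets manager, never in code.

```python
# Minimal TOTP second-factor check, one of the identity controls above.
# Secret handling here is illustrative, not production key management.
import pyotp

# In practice the secret is generated at enrolment and stored securely;
# the user loads it into an authenticator app via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At sign-on the user submits the current six-digit code with their password.
code_from_user = totp.now()  # stand-in for the code the user would type

if totp.verify(code_from_user, valid_window=1):
    print("second factor accepted")
else:
    print("second factor rejected; block sign-on and alert monitoring")
```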
Just as Hope emerged last from Pandora’s Box, AI offers optimism amid its challenges. When its power is combined with fraud-prevention expertise and robust identity verification, AI can drive progress and play a vital role in combating fraud.

Brandon Spear is CEO of TreviPay
