The current interest in generative artificial intelligence has led some to conclude that we may see a new wave of AI-enabled fraud. However, the payments industry shouldn’t overestimate the abilities of large language models (LLMs), or underestimate the power of existing anti-fraud systems, says chargeback technology platform Chargebacks911.

AI has played a vital part in anti-fraud and anti-chargeback operations for over a decade, long before the current wave of enthusiasm for AI (specifically, for large language models such as ChatGPT). It has been used for everything from checking the validity of individual chargeback claims to consolidating compelling evidence to overturn fraudulent chargebacks. Given the sheer quantity of transaction disputes, it would be impossible for human operators to examine and contest every chargeback, or even a small fraction of them. AI is the leading tool for businesses seeking to avoid losing money to chargeback abuse. But it is a solution that may have unexpected consequences if the same technology is used by malicious customers engaging in friendly fraud.

“The same technology that allows companies to prevent chargebacks could also be turned to automating the creation of false chargebacks of far better ‘quality’ than those produced by amateurs,” said Monica Eaton, CEO of Chargebacks911.

“Doing so would allow bad actors to work at far larger scales, with chargeback claims that stand a far better chance of going undetected.”

According to Eaton, this would be a disaster for online merchants. It raises the question: what are the capabilities of AI-powered fraud, and what can companies do to blunt its impact?

AI and large language models

The first and perhaps most important step in making sense of the current craze for artificial intelligence is understanding the difference between ‘true’ AI and large language models.

A large language model works by ingesting vast amounts of written text and learning the statistical patterns within it, which it then uses to generate realistic messaging and responses. While an LLM can recognise requests and produce its own replies, it suffers from ‘hallucinations’: basic mistakes caused by factors such as incomplete or noisy training data, or a misreading of context. This prevents LLMs from attaining ‘artificial general intelligence’ (AGI), the state of being truly indistinguishable from human intellect, and makes them unsuitable for many commercial applications, especially where current, up-to-date data is needed. LLMs can’t understand human requests, but they can convincingly match their output to our input.
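To make that distinction concrete, here is a deliberately tiny Python sketch of pattern-based text generation, the same principle an LLM applies at vastly greater scale. The corpus and the bigram approach are illustrative assumptions only; no production model works this simply.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the vast amounts of written text an
# LLM is trained on. Purely illustrative.
corpus = (
    "the customer disputed the charge because the item never arrived "
    "the customer disputed the charge because the card was stolen "
    "the merchant refunded the charge because the customer complained"
)

# Learn bigram statistics: which words tend to follow which.
follows = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Emit text by repeatedly sampling a likely next word.

    There is no comprehension here: the output is fluent only
    because it mirrors patterns present in the training text.
    """
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
# e.g. "the customer disputed the charge because the item never arrived"
```

The output can read fluently, yet the program has no notion of what a chargeback is; any fluency comes entirely from patterns in the training text, which is also why gaps in that text produce confident-sounding mistakes, the hallucination problem in miniature.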

According to Eaton: “The machine-learning algorithms used by Chargebacks911 and other anti-fraud companies aren’t designed to mimic humans. They are built to extract certain information from a dataset. They do not make ‘decisions’ with this information, but rather follow decision-trees based on parameters set by their overseers. These systems can be very sophisticated, up to the point of being able to improve themselves. But they are not ‘intelligence’ in any real sense, and perhaps this is for the best.”
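As a rough illustration of what following a decision-tree with overseer-set parameters can look like, consider the Python sketch below. Every field name, threshold, and routing label is invented for the example; none of it reflects Chargebacks911’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class Chargeback:
    amount: float             # disputed amount in the account currency
    days_since_purchase: int  # time elapsed between purchase and dispute
    prior_disputes: int       # disputes previously filed by this customer
    item_was_delivered: bool  # carrier confirmation on record

# Parameters set by human overseers, not "decided" by the system itself.
MAX_DISPUTE_WINDOW_DAYS = 120
REPEAT_DISPUTER_THRESHOLD = 3

def assess(cb: Chargeback) -> str:
    """Walk a fixed decision tree and return a routing label."""
    if cb.days_since_purchase > MAX_DISPUTE_WINDOW_DAYS:
        return "reject: outside dispute window"
    if cb.item_was_delivered and cb.prior_disputes >= REPEAT_DISPUTER_THRESHOLD:
        return "flag: likely friendly fraud"
    if cb.amount < 10:
        return "accept: below review cost"
    return "review: route to human analyst"

print(assess(Chargeback(89.99, 14, 4, True)))  # flag: likely friendly fraud
```

The system extracts information and applies rules; nothing in it resembles human-style reasoning, which is exactly the point Eaton makes.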

Eaton added: “For this reason, LLMs have limited applications in fighting chargebacks. Aside from acting as a responsive customer service tool, the ability of LLMs to generate large amounts of relatively convincing, but often inaccurate, responses isn’t going to move the needle on the epidemic of chargeback fraud.”

Could LLMs commit chargeback fraud?

“The short answer is absolutely, and they likely already are,” said Eaton. “While much chargeback fraud is carried out by individuals just trying to make a little money, there is a major risk of chargeback fraud being committed on a mass scale by professional groups. For these groups, quantity is important. By filing hundreds of fraudulent chargeback claims a day, they can make incredible amounts of money by taking advantage of companies with a lackadaisical response to transaction disputes.”

Just as it is impossible for human operators to deal with every chargeback attempt, it is also very difficult for fraudsters to handle the ‘paperwork’ that comes with making dozens of chargeback attempts each day. Not only do the chargebacks themselves have to be created; fraudsters may also have to answer enquiries from card schemes very accurately or risk being caught. These responses are exactly the kind of writing LLMs may be able to help fraudsters craft.

Combatting AI-enabled chargeback fraud

“It seems entirely possible that LLMs can be used to create large amounts of relatively convincing written content to support fraud. Is this the death knell for our efforts to fight chargeback fraud? Now that fraudsters are using the latest generation of AI, are anti-fraud companies outgunned?

“In a sense, no. Creating written content is not a skill in especially high demand when it comes to effectively carrying out online fraud. The anti-fraud systems used not just by Chargebacks911 but by every major payment company look for much more than written content. They analyse potentially thousands of signals, however small and seemingly insignificant, to build a complete threat assessment of each transaction and chargeback request. Even if the written elements submitted by fraudsters are perfectly crafted, there are still more than enough chokepoints at which fraudulent information is detected. Our track record shows that our constantly updated systems are more than capable of alerting merchants to AI-enabled fraud.”
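A heavily simplified sketch of that multi-signal approach: combine many weak indicators into one risk score rather than trusting any single piece of written evidence. The signal names, weights, and threshold below are hypothetical, chosen only to show the shape of the technique; real systems weigh thousands of signals, typically with learned rather than hand-set weights.

```python
# Hypothetical signals observed on one chargeback request.
# Each entry maps a signal name to (fired, weight); both the
# names and the weights are illustrative assumptions.
signals = {
    "ip_country_mismatch": (True, 0.30),      # IP geolocation differs from billing country
    "new_device_for_account": (False, 0.20),  # device not previously seen on this account
    "delivery_confirmed": (True, 0.35),       # carrier shows the item was delivered
    "dispute_text_generic": (True, 0.15),     # claim narrative matches known templates
}

def risk_score(sigs: dict[str, tuple[bool, float]]) -> float:
    """Sum the weights of every signal that fired, normalised to [0, 1]."""
    total = sum(w for _, w in sigs.values())
    fired = sum(w for hit, w in sigs.values() if hit)
    return fired / total if total else 0.0

score = risk_score(signals)
print(f"risk score: {score:.2f}")  # 0.80 with the values above
print("flag for review" if score > 0.5 else "auto-accept")
```

Note that the written dispute narrative contributes only one signal among many, which is why polished, LLM-generated text alone does little to move a claim past this kind of screening.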