Artificial intelligence is transforming financial fraud at an alarming pace, making scams more sophisticated and harder to detect. While fraud attempts have surged by 80% in the past three years, just 22% of firms have AI-powered defences in place. Stuart Wilkie, Head of Commercial Finance at Anglo Scottish Finance, explores the evolving threat landscape and how institutions — and individuals — can fight back.
Tackling financial fraud has become more difficult than ever in recent years, thanks to the increasing prevalence of artificial intelligence (AI). A recent report from Signicat highlights AI's growing role in the murky world of financial fraud, suggesting that AI now accounts for 42% of all financial fraud attempts – while just 22% of firms have AI defences in place. This disconnect is worrying, but sadly, it's nothing new.
The use of AI in financial fraud was rising well before ChatGPT, the world's most popular AI chatbot, launched in late 2022 – and it has only accelerated since. A 2022 report from Cifas found an 84% increase in cases where AI was used in attempts to breach banks' security systems.
AI has made it easier for fraudsters to carry out their schemes, which has in turn driven up overall fraud incidence. Signicat's report also found that the volume of fraud attempts is rising rapidly, up 80% over the last three years. This is partly because AI lowers the barrier to executing financial fraud schemes, but it is also attributable to external factors.
So, what are the most common forms of AI-fuelled financial fraud, and how can it be combated at both an individual and an institutional level?
The majority of AI-aided financial fraud can be categorised as synthetic identity fraud. In this scam, fraudsters use AI to create fake identities that combine real and fabricated information, then use them to apply for loans, lines of credit or even benefits.
AI's capacity to quickly identify patterns within large datasets lets fraudsters build realistic profiles that align with demographic trends, and generative AI can simulate a plausible credit history for each one. The resulting profiles are near-impossible to distinguish from real people under standard verification checks.
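To make that detection challenge concrete, below is a minimal, purely illustrative Python sketch of the kind of rule-based consistency screen an institution might layer on top of standard identity checks. The Applicant fields, thresholds and flag wording are all hypothetical assumptions for this example; real verification systems combine many more signals, including document checks, device data and machine-learning scoring.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Applicant:
    # Hypothetical fields for illustration only
    name: str
    date_of_birth: date
    earliest_credit_record: date   # oldest entry on the credit file
    phone: str
    address: str

def age_at(dob: date, when: date) -> int:
    """Whole years between a date of birth and a later date."""
    return when.year - dob.year - ((when.month, when.day) < (dob.month, dob.day))

def consistency_flags(applicant: Applicant,
                      phone_counts: dict[str, int],
                      address_counts: dict[str, int]) -> list[str]:
    """Toy rule-based screen for classic synthetic-identity red flags."""
    flags = []
    # A credit file that predates the applicant's adulthood is a
    # well-known synthetic-identity inconsistency.
    if age_at(applicant.date_of_birth, applicant.earliest_credit_record) < 18:
        flags.append("credit history predates adulthood")
    # Contact details reused across many applications suggest a
    # fabricated-identity farm (threshold of 3 is an arbitrary example).
    if phone_counts.get(applicant.phone, 0) >= 3:
        flags.append("phone shared across many applications")
    if address_counts.get(applicant.address, 0) >= 3:
        flags.append("address shared across many applications")
    return flags

# Hypothetical application: a 2001 date of birth paired with a credit
# record from 2015, on a phone number already seen four times.
app = Applicant("Jane Doe", date(2001, 5, 14), date(2015, 3, 1),
                "07700 900123", "1 High Street")
print(consistency_flags(app, {"07700 900123": 4}, {}))
# ['credit history predates adulthood', 'phone shared across many applications']
```

The sketch also shows the weakness described above: an AI-generated profile with internally consistent dates and unique contact details sails straight through rules like these, which is why standard checks struggle against well-crafted synthetic identities.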
A report from the U.S. Government Accountability Office (GAO) estimates that more than 80% of new-account fraud can be attributed to synthetic identity fraud – underlining how vital it is to strengthen verification and security measures.