AI: A Boon or Curse for Fraud Risk Management?
– by Sriram Natarajan & Paresh Ashara

March 5, 2025


Synopsis

In the face of rapidly evolving AI technologies, fraudsters are leveraging tools like deepfakes to execute highly sophisticated scams, posing significant challenges for traditional banking defenses. The authors suggest that risk management practitioners should implement specific strategies such as employing advanced machine learning algorithms for fraud detection, enhancing employee training to recognize AI-driven scams, and investing in robust cybersecurity measures to safeguard their organizations against potential financial losses.

Introduction

Bank account and payment scams are, unfortunately, a common way for criminals to gain access to people’s personal and financial information. Cybersecurity Ventures projects that the global cost of cybercrime – including fraud, scams, identity theft, data breaches, ransomware, and more – will reach USD 10.5 trillion annually in 2025.

The emergence of artificial intelligence (AI) has transformed various sectors, but it has also provided fraudsters with powerful tools to enhance their scams. Risk leaders in organizations must adapt to this evolving landscape by adopting defensive strategies against AI-driven fraud.

The Rise of AI in Fraud

Fraudsters are quick to adopt AI technologies, utilizing them to create sophisticated scams that can bypass traditional banking infrastructures. AI tools enable scammers to automate and personalize attacks, making it easier to deceive victims at scale. For example:

Deepfake Technology: Scammers are increasingly using deepfake technology to impersonate individuals, including family members or executives, and manipulate victims into transferring money or divulging sensitive information. This technology allows scammers to create convincing audio and video of people saying or doing things they never actually did. Deepfakes are being used for various scams, such as:

Figure 1. Deepfake Scams

1. Impersonating Company Executives to trick employees into transferring funds. E.g. Deepfake CFO tricks employee into transferring more than USD 25 million.

2. Faking Celebrity Endorsements for crypto scams. E.g. Get rich quick scheme uses deepfake technology to impersonate Elon Musk.

3. Creating Fake Social Media Profiles to perpetrate romance scams. E.g. The Tinder Swindler.

Generative AI for Fake Content: Fraudsters can use AI to generate convincing fake texts, images, and videos, which can be employed to create fraudulent advertisements or impersonate individuals. This capability allows for the creation of synthetic identities that can evade Know Your Customer (KYC) protocols and facilitate account takeovers.

AI scams often target customers directly, bypassing established banking systems – a significant shift from traditional fraud management. Now that customers have become ‘digital natives’, it is easier for fraudsters to reach them directly; in fact, many a time the fraudsters pose as employees of the bank! This shift poses significant challenges for financial institutions and risk leaders.

Defensive Strategies for Risk Leaders

To counter these evolving threats, risk managers have several strategies at their disposal. With deepfakes becoming more realistic and challenging to spot, it has become more effective to fight AI with AI. Compared to humans, AI is better at identifying sophisticated attack vectors such as deepfakes.

Enhanced Fraud Detection Strategies

Figure 2. Enhanced Fraud Detection Strategies

1. Adopting AI for Fraud Detection: The most obvious answer is to use AI (or better AI) to deal with fraudulent AI. Implementing AI and machine learning solutions can enhance real-time monitoring and detection of fraudulent activities.
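As a concrete illustration, the simplest building block of such real-time monitoring is an anomaly score computed against the customer’s own history. The sketch below is purely hypothetical (function name, threshold, and sample data are all invented for the example) and is not any institution’s actual model:

```python
import statistics

def flag_anomaly(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    customer's historical spending pattern (simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:  # no variation in history: flag any deviation at all
        return new_amount != mean
    z_score = abs(new_amount - mean) / stdev
    return z_score > z_threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # customer's past card spends
print(flag_anomaly(history, 50.0))    # → False (typical amount)
print(flag_anomaly(history, 5000.0))  # → True (far outside the pattern)
```

Production systems, of course, combine many such signals (amount, merchant, device, geography, velocity) in trained machine learning models rather than relying on a single statistic.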

To combat such threats, financial institutions must adopt a multi-layered approach. Some examples of successful AI-driven countermeasures:

a. Wells Fargo Bank uses behavioral biometrics to analyze patterns in user behavior, such as typing speed and mouse movements, which are unique to each individual and difficult for fraudsters to mimic.

b. HSBC Bank has implemented multi-factor authentication (MFA) and liveness detection to ensure that the person on the other end of the camera is indeed a live individual and not a deepfake.

c. Barclays has adopted a multi-layered approach that includes MFA, liveness detection, and real-time fraud detection systems using machine learning algorithms to detect unusual patterns and flag suspicious activities.

Liveness detection technology is designed to distinguish between real, live human beings and static images or videos. This technology can detect subtle movements, such as blinking or changes in facial expression, that are difficult for deepfakes to replicate convincingly.
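To make the blink-based idea concrete, here is a hypothetical sketch: assume an upstream facial-landmark model emits an eye-aspect-ratio (EAR) reading per video frame, where a low EAR means the eyes are closed. Counting short closed-eye runs then separates a live face from a static photo or naive replay. All names, thresholds, and sample values below are illustrative assumptions, not any vendor’s product:

```python
def count_blinks(ear_series, closed_thresh=0.21, min_closed_frames=2):
    """Count blink events in a series of eye-aspect-ratio (EAR) readings:
    a blink is a short consecutive run of 'eyes closed' frames."""
    blinks = 0
    closed_run = 0
    for ear in ear_series:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # blink in progress at end of window
        blinks += 1
    return blinks

def passes_liveness(ear_series, min_blinks=1):
    """A static photo or a naive deepfake replay often shows no
    blinking at all over a capture window."""
    return count_blinks(ear_series) >= min_blinks

live_face = [0.30, 0.29, 0.12, 0.10, 0.28, 0.31, 0.11, 0.09, 0.30]
static_photo = [0.30] * 9
print(passes_liveness(live_face))     # → True (two blinks detected)
print(passes_liveness(static_photo))  # → False
```

Real deployments add further challenges (head-turn prompts, texture and depth analysis) because sophisticated deepfakes can synthesize blinking; the blink check is just one layer.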

2. Cross-Industry Collaboration: Financial institutions can benefit from sharing data and insights to create a more robust defense against fraud. Collaborative efforts can help standardize data-sharing protocols and enhance collective security measures across the industry.

a. Australia’s Scam Prevention Framework: Introduced by the Australian government, this framework fosters collaboration among financial institutions, telecom companies, and digital platforms to share information on scam trends and emerging threats.

b. Nacha’s New Rules for ACH Payments: In the U.S., Nacha, which oversees ACH payments, is set to implement new rules in mid-2026 aimed at enhancing cooperation in fraud detection. These rules will require both sending and receiving financial institutions to adopt procedures for handling suspicious ACH credits, encouraging a collaborative approach to combat unauthorized transactions.
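The detailed rule text is still pending, but a receiving institution’s screen of the kind Nacha encourages can be pictured as a simple rules pass over each incoming credit before it posts. The sketch below is purely hypothetical – the field names, limits, and reason strings are invented for the example and do not reflect Nacha’s actual requirements:

```python
def screen_ach_credit(credit, known_payers, recent_credit_count,
                      velocity_limit=5, amount_limit=10_000):
    """Hypothetical rule-based screen on an incoming ACH credit.
    Returns the list of reasons, if any, to hold the entry for review."""
    reasons = []
    if credit["amount"] >= amount_limit:
        reasons.append("large amount")
    if credit["payer_id"] not in known_payers:
        reasons.append("first-time payer")
    if recent_credit_count >= velocity_limit:
        reasons.append("unusual credit velocity on account")
    return reasons

credit = {"payer_id": "P-991", "amount": 12_500}
flags = screen_ach_credit(credit, known_payers={"P-104"}, recent_credit_count=6)
print(flags)  # all three review reasons fire for this entry
```

A credit returning an empty list would post normally; anything else would be routed to an analyst, which is the collaborative hand-off the new rules aim to formalize between sending and receiving institutions.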

3. Customer Education and Engagement: Building awareness among customers about potential fraud risks is crucial. Financial institutions should provide regular updates and training to help customers recognize and report suspicious activities.

a. Bank of America:

  1. Educational Campaigns: The bank launched comprehensive educational campaigns, including guides, videos, and webinars, to inform customers about various fraud risks, including phishing and deepfake scams.
  2. Real-Time Alerts: They implemented a system to send real-time alerts to customers about potential fraud threats, providing tips on how to verify the authenticity of communications.

b. Troy Bank & Trust:

  1. #FraudFridays: This bank uses social media to post weekly updates under the hashtag #FraudFridays. These posts educate customers about trending scams and provide tips on how to avoid them.
  2. Workshops and Webinars: They host regular fraud awareness workshops and webinars to engage with customers directly and answer their questions about fraud prevention.

4. Investing in Talent and Technology: Hiring skilled personnel and investing in advanced fraud detection technologies are essential for staying ahead of fraudsters. Continuous training and development of internal teams can foster a culture of vigilance and adaptability.

Preparing for Future Threats

While AI scams pose a significant challenge today, risk leaders must also be mindful of the potential impact of quantum computing in the future. Quantum computers have the power to break many of the cryptographic algorithms currently used to secure financial transactions and sensitive data. Investing in quantum-resistant cryptography and staying informed about the latest developments in this field will be crucial for maintaining security in the years to come.

As AI continues to evolve, so do the tactics employed by fraudsters. Risk leaders must be proactive in adapting their defenses to protect both their organizations and their customers from the growing threat of AI-enhanced scams. By investing in technology, training, and awareness, they can better navigate this complex landscape and mitigate the risks associated with AI-driven fraud.

Authors

Sriram Natarajan

President of Quinte Financial Technologies

Sriram is the president of Quinte Financial Technologies Inc., New York. He has over 30 years of experience in financial services for credit unions and payment processors. His extensive experience in the credit and risk industry includes positions with several highly respected organizations, including American Express, HSBC, the National Bank of Kuwait, and GE Money. He holds several professional titles, including Certified Public Accountant, Chartered Global Management Accountant, and Certified Fraud Examiner.

Paresh Ashara

Vice-President at Quinte Financial Technologies

Paresh is a Vice-President at Quinte Financial Technologies, managing the Data Analytics service line. He brings over 26 years of IT services and product engineering experience in the BFSI vertical. He is passionate about data analytics and takes an active interest in discussing business solutions with clients and sharing knowledge with academia.

Source: This article was originally published in “Intelligent Risk by PRMIA” in February 2025

Reference Links

  1. Chen, H. and Magramo, K. (2024, February 4). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer.’ CNN. https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
  2. Unusual CEO Fraud via Deepfake Audio Steals US$243,000 From UK Company. (2019, September 5.) Trend Micro. https://www.trendmicro.com/vinfo/us/security/news/cyber-attacks/unusual-ceo-fraud-via-deepfake-audio-steals-us-243-000-from-u-k-company
  3. BBB Scam Alert: Get Rich Quick Scheme Uses Deepfake Technology to Impersonate Elon Musk. (n.d.). Better Business Bureau. https://www.bbb.org/article/scams/27185-bbb-scam-alert-get-rich-quick-scheme-uses-deepfake-technology-to-impersonate-elon-musk
  4. Sarner, L. (2022, February 2). The devious tactics ‘Tinder Swindler’ used to con singles out of $10M. NY Post. https://nypost.com/2022/02/02/i-was-in-love-with-the-tinder-swindler/
  5. Biometric Authentication Questions. (n.d.). Wells Fargo. https://www.wellsfargo.com/help/security-and-fraud/biometric-faqs/
  6. Security Center. (n.d.). Bank of America. https://www.bankofamerica.com/security-center/overview/
  7. HSBC To Add Precautionary Measures For Accounts In View Of Enhancing Safety Features. (2022, October 5). Business Today. https://www.businesstoday.com.my/2022/10/05/hsbc-to-add-precautionary-measures-for-accounts-in-view-of-enhancing-safety-features/
  8. Fraud and Cybersecurity. (n.d.). Bank of America. https://business.bofa.com/en-us/content/fraud-prevention-and-cyber-security-solutions.html
  9. Troy Bank & Trust. (August 2024). LinkedIn. https://www.linkedin.com/posts/troybankandtrust_have-you-received-an-email-recently-and-you-activity-7227668707689254912-7DVS/
