While AI (machine learning) models have been used in fraud management and anti-money laundering (FRAML) before, generative AI (genAI) brings significant new benefits to FRAML initiatives, including improved customer experience, fraud management accuracy, and operational efficiency. Here are some of the key points to keep in mind when it comes to genAI and fraud management. I’ll dig into this topic in detail at a session at our upcoming Security & Risk Summit in December as part of the Identity & Fraud track.
- You have to defend with genAI because fraudsters are attacking with genAI. Fraudsters use genAI to create entirely fabricated driver's license and national ID images. They can also overlay synthetic video and audio on top of a victim's legitimate video and audio feeds in real time. Services used to generate deepfakes, such as DALL-E, ElevenLabs, and Synthesia, are quickly becoming less expensive and easier for anyone to use.
- Deepfake defenses are nascent and rely on genAI, but they're becoming more powerful. Deepfake defenses are multilayered: They involve spectral video analysis, video artifact analysis, and detection of fake and/or repetitive video backgrounds. Good deepfake detection tools also verify device security posture and hygiene, for example by detecting and blocking jailbroken or rooted devices and by ensuring the integrity and authenticity of video and audio capture on the device. Behavioral biometrics (analyzing mouse movements, typing patterns, touchscreen gestures, ambient light sensor data, device accelerometer data, and other user and device activity attributes) uses generative AI to predict attack patterns and is critical in detecting deepfakes (see the illustrative sketch after these six points). Most facial biometrics, physical identity document verification, and voice biometrics vendors are working on embedding these technologies into their core technology stacks to provide integrated protection against deepfakes.
- GenAI automates fraud risk-scoring model management. One of the greatest benefits of genAI is its ability to generate rule recommendations for risk-scoring models based on past transactions, investigator decisions, and consortium data. ML models also benefit from genAI: Data scientists can use generative adversarial networks (GANs) to pit genAI against an operational risk-scoring ML model, creating synthetic fraudulent activities that help build defensive tuning into the risk-scoring model (see the illustrative sketch after these six points). This automation lowers the overall cost of model development and lets human data scientists focus on more creative activities such as model selection and discovering new AI and ML algorithms.
- Modern know-your-customer (KYC) processes build on genAI. KYC is all about finding entities (names, addresses, etc.) on watchlists; discovering relationships between entities; translating those discovered relationships into entity risk scores; and predicting risk scores for new transactions. Using large language models and transaction models, genAI applies natural language processing to control lists (such as OFAC), adverse media, and politically exposed persons (PEP) lists to filter out red herrings (see the illustrative sketch after these six points). GenAI can also provide copilot functionality for junior investigators and simplify regulatory and operational report generation.
- GenAI delivers synergistic business value in fraud management and anti-money laundering (AML). Fraud management and AML professionals continually struggle with: 1) continuous KYC, i.e., KYC checks triggered by changes to account data; 2) improving risk-scoring models without downtime; and 3) lowering the cost of investigation and compliance reporting. GenAI promises to help financial institutions do more with less. GenAI's key areas of business value in the enterprise fraud management (EFM)/AML space include:
- Risk-scoring efficiency improvements to reduce labor costs.
- Improved false positive identification.
- Better models and more efficient investigations, resulting in lower regulatory fines.
- Improved protections against (genAI-generated) deepfakes.
- GenAI usage in EFM and AML still introduces risks. Of course, genAI also has downsides in EFM and AML, including the following:
- Governance of genAI models is harder than for traditional AI models.
- GenAI's explainability is unclear, and copyright infringement lawsuits are still pending.
- GenAI’s probabilistic nature hinders repeatability and may cause hallucinations.
- IP and privacy protections are difficult to enforce.
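To make the behavioral biometrics idea in the deepfake-defense point above a bit more concrete, here is a deliberately simplified sketch. The feature names, the values, and the choice of an isolation forest are my own illustrative assumptions, not how any particular vendor does it; commercial deepfake defenses combine many more signals (spectral video analysis, device attestation, injection detection) and far more sophisticated models.

```python
# Simplified sketch, not a vendor implementation: flag a session whose
# behavioral signals deviate from a customer's historical baseline.
# All feature names and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Hypothetical baseline: 200 past sessions with features such as
# keystroke interval (ms), mouse velocity (px/s), touch-pressure variance,
# accelerometer jitter, and ambient-light variance.
baseline = np.column_stack([
    rng.normal(180, 10, 200),    # keystroke interval
    rng.normal(430, 30, 200),    # mouse velocity
    rng.normal(0.12, 0.02, 200), # touch-pressure variance
    rng.normal(0.05, 0.01, 200), # accelerometer jitter
    rng.normal(0.30, 0.05, 200), # ambient-light variance
])

# Fit an anomaly detector on the customer's known-good sessions.
detector = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# Injected or scripted input often looks unnaturally uniform: near-zero
# sensor noise and machine-steady timing. Such a session typically scores
# as an outlier (-1) and would be escalated for step-up verification.
suspect = np.array([[100.0, 900.0, 0.0, 0.0, 0.0]])
print(detector.predict(suspect))
```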
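The risk-scoring point above mentions using GANs to create synthetic fraudulent activity. The sketch below shows only the bare GAN mechanics: a small generator learns to produce "fraud-like" transaction vectors by playing against a small discriminator (a stand-in; in practice the adversarial feedback could come from, or be combined with, the operational risk-scoring model). The feature layout, network sizes, and training length are illustrative assumptions, not a production recipe.

```python
# Minimal GAN sketch (PyTorch): generate synthetic "fraud-like" transaction
# vectors that a data science team could use to stress-test and retune a
# risk-scoring model. Everything here is illustrative.
import torch
import torch.nn as nn

N_FEATURES = 4   # e.g., amount, hour-of-day, merchant risk, velocity (assumed)
LATENT_DIM = 8

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Stand-in for a table of confirmed-fraud transactions (already normalized).
real_fraud = torch.rand(256, N_FEATURES)

for step in range(500):
    # Train the discriminator to separate real fraud from generated fraud.
    noise = torch.randn(64, LATENT_DIM)
    fake = generator(noise).detach()
    real = real_fraud[torch.randint(0, len(real_fraud), (64,))]
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    noise = torch.randn(64, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Synthetic fraud examples that can be labeled and blended into the
# risk-scoring model's training or evaluation data.
synthetic_fraud = generator(torch.randn(1000, LATENT_DIM)).detach()
print(synthetic_fraud.shape)  # torch.Size([1000, 4])
```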
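Finally, for the KYC point: a minimal illustration of the watchlist-screening step, fuzzy-matching a customer name against a hypothetical sanctions-style list. The list entries, the threshold, and the plain string similarity are stand-ins; real KYC screening layers transliteration, aliases, dates of birth, and LLM-driven adverse-media analysis on top of simple matching like this.

```python
# Simplified sketch: fuzzy screening of a customer name against a
# hypothetical watchlist. Entries and threshold are made up for illustration.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Global Shell Trading LLC", "Maria N. Gonzalez"]
MATCH_THRESHOLD = 0.85  # hypothetical tuning value

def screen(customer_name):
    """Return watchlist entries whose similarity exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, customer_name.lower(), entry.lower()).ratio()
        if score >= MATCH_THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Ivan Petrof"))  # near-match: escalate to an investigator
print(screen("Jane Doe"))     # no hit: reduces false-positive workload
```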
These six points should give you a good understanding of where genAI can fit into your fraud management strategy. To learn how they may apply to your specific organization or business, be sure to come to our session at the Security & Risk Summit on December 9–11 in Baltimore. I'll also be on a panel discussing all things identity and fraud. Check out the agenda, and I hope to see you there.