#32 – AI
Artificial intelligence is everywhere and is already causing massive disruptions across many industries. In the right hands and with proper direction, AI offers wide-ranging benefits for businesses and individuals. Unfortunately, bad actors are exploiting this rapidly developing technology to commit large-scale fraud. Identity thieves and other cybercriminals use AI to implement new scams or refine old tactics designed to steal personal information and other valuable data.
How AI can be used to commit identity theft and other forms of fraud is evolving as fast as the technology itself, a steady cause for concern as defenses struggle to keep pace with new frauds, scams, and criminal strategies. At its core, AI has supercharged scammers' traditional tactics by making the process of collecting valuable personal information (SSNs, credit card numbers, email addresses, passwords, etc.) more straightforward and effective. Once that information is stolen with the help of artificial intelligence, it can be misused by bad actors in any number of ways.
In addition to providing an easy avenue for personal data theft and exploitation, AI-related identity theft is proving more difficult to remediate and recover from, primarily because of the complexity of the AI-based systems used to steal data or commit fraud. However, AI models are also being used to detect and safeguard against fraud, offering promising possibilities for the future of data protection.
Current Instances of AI Identity Theft
Although no single set of parameters defines AI identity theft, looking at current examples of how criminals use and develop the technology provides insight into its growing prevalence.
- Deepfake Impersonations: AI-powered deepfake technology allows scammers to create highly convincing videos or audio recordings of individuals, often public figures or C-suite executives. These manipulated media can be used to trick, blackmail, or impersonate targets for financial gain or reputational damage, leading to fraud and identity theft. The first widely documented occurrence of AI fraud involved deepfake tactics.
- Credential Stuffing Attacks: AI-driven bots can automate large-scale credential stuffing attacks using stolen username and password combinations to gain unauthorized access to various online accounts, including email, social media, or banking. These attacks rely on AI algorithms to attempt numerous combinations rapidly, drastically increasing the chances of success compared to human-powered hacking.
- Chatbot Scams: AI-driven chatbots can mimic customer support representatives or friends and family members, engaging in conversations with unsuspecting victims to extract sensitive information or manipulate them into fraudulent actions, such as transferring money or revealing personal details.
- Synthetic Identity Fraud: AI enables the creation of synthetic identities by combining real and fake personal information to establish financial accounts, obtain loans, or open credit cards. These synthetic identities often go undetected for some time, accumulating fraudulent activities that can harm individuals and financial institutions.
- AI-Enhanced Phishing: AI-powered phishing attacks can generate highly convincing and targeted email messages, often using personal information scraped from social media or the dark web. These emails deceive recipients into clicking on malicious links, downloading malware, or revealing confidential information.
- Credit Scoring Manipulation: Fraudsters can use AI to manipulate credit scoring algorithms, artificially inflating credit scores to obtain loans or credit cards under false pretenses. This can lead to financial institutions lending to high-risk individuals who are more likely to default on their loans.
- Criminal-Created AI Tools: Cybercriminals are creating AI tools that work like widely used public AI models. FraudGPT is one example; it mimics ChatGPT in design and ease of use. This bot is being sold on the dark web to bad actors looking for easy access to effective scam email campaigns, instructions for implementing a malware attack, or tools for better hacking.
The above represents some significant examples of how AI is being used to facilitate identity theft and fraud, but it’s not an exhaustive list. The rapidly developing nature of AI in all its forms means that the scams associated with it will continue to evolve. Effective cybersecurity tactics and response plans are critical to remaining a step ahead of the many AI-related threats and the criminals behind them.
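To make the defensive side concrete: credential-stuffing campaigns like those described above leave a telltale signature of rapid, repeated login failures from a single source, which automated monitoring can flag. Below is a minimal sketch of such a detector. The class name, thresholds, and sample IP are illustrative assumptions, not a real product's API.

```python
from collections import defaultdict, deque

# Hypothetical sliding-window heuristic: flag a source that produces
# many failed logins in a short interval, a common credential-stuffing
# signature. Threshold values here are illustrative assumptions.
WINDOW_SECONDS = 60
MAX_FAILURES = 10

class StuffingDetector:
    def __init__(self, window=WINDOW_SECONDS, max_failures=MAX_FAILURES):
        self.window = window
        self.max_failures = max_failures
        self.failures = defaultdict(deque)  # source IP -> failure timestamps

    def record_failure(self, ip, timestamp):
        """Record a failed login; return True if the source looks automated."""
        q = self.failures[ip]
        q.append(timestamp)
        # Discard failures that fell outside the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures

detector = StuffingDetector()
# Simulate a bot firing 12 failed logins within about two seconds.
flags = [detector.record_failure("203.0.113.7", t * 0.2) for t in range(12)]
print(flags[-1])  # True: the burst exceeds the threshold
```

Real-world systems layer far more signal on top of this (device fingerprints, geolocation, learned per-account baselines), but the core idea, spotting machine-speed patterns no human produces, is the same one AI-driven fraud defenses build on.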
Credit monitoring will not alert you to this type of fraud.
LibertyID will take the following steps for/with their members:
- Place fraud alerts at all three credit reporting agencies
- Place credit freezes at all three credit reporting agencies, if appropriate
- File a report with the FTC
- Review credit reports with the victim to ensure there are no other types of fraud
- Provide single-bureau credit monitoring with alerts for 12 months
- Periodically contact the member throughout the 12 months following the resolution of their ID theft recovery case, if warranted