Cybersecurity experts share how AI could enhance tax-related scams

As artificial intelligence (AI) evolves, malicious actors could leverage it to enhance the sophistication, scale, and success rate of tax fraud, scams, and financial cybercrime. Through improved automation and personalization, AI can make scams more difficult to detect.
According to the Internal Revenue Service (IRS), more than $9.1 billion in fraud from tax and financial crimes was identified in 2024. As individuals and organizations alike prepare to file taxes, cyber awareness is essential for keeping personal and financial information secure. Below, cybersecurity experts share their insights on AI-related threats and how users can stay safe this tax season.
Security leaders weigh in
J Stephen Kowski, Field CTO at SlashNext Email Security+:
The most prevalent attacks we’re seeing involve links that direct users to cloud collaboration services where malicious files are hosted or legitimate services are impersonated. Attackers are increasingly registering legitimate accounts on trusted platforms and using the platform’s own notification system to deliver phishing attempts, making them harder to detect.
Phishing via text and voice is also on the rise, effectively lowering the barrier to entry for attackers to reach potential victims. AI tools are making it easier for scammers to create convincing impersonations that bypass traditional security measures through perfectly crafted messages. The best defense is implementing separate validation controls — always verify requests through an independent channel rather than responding directly to the message you received. Look for subtle inconsistencies in language patterns and consider using live scanning technology that can analyze content, behavior, and intent to identify malicious elements before you interact with them.
If you suspect that your personal data has been compromised, contact the IRS Identity Protection Specialized Unit immediately and file Form 14039 (Identity Theft Affidavit) to alert them of the situation. Place a fraud alert with one of the three major credit bureaus, which will automatically notify the other two, and consider freezing your credit to prevent new accounts from being opened. The IRS typically doesn’t initiate contact through email, text messages, or social media channels, so any proactive communication through these channels should immediately raise suspicion. Urgency is a major red flag — scammers create artificial time pressure to force quick decisions before you can properly validate the request. Always slow down and independently verify the source through official channels rather than using contact information provided in the suspicious message — this approach catches most sophisticated scams regardless of how authentic they appear.
Patrick Tiquet, Vice President, Security & Architecture at Keeper Security:
In 2025, we’re seeing AI-driven phishing attacks and credential stuffing becoming more prevalent. Cybercriminals are using AI tools to create highly convincing phishing emails, mimicking communications from trusted entities like the IRS or financial institutions. These attacks often target individuals during tax season, capitalizing on the urgency to file taxes and the potential for confusion.
Credential stuffing attacks also continue to rise, where attackers use stolen login information from previous data breaches to access accounts with sensitive tax data. This is especially concerning when individuals reuse passwords across multiple sites. The best defense is ensuring strong, unique passwords for every account. Password managers can help with this, and enabling Multi-Factor Authentication (MFA) is essential. If individuals and businesses employ least-privilege access — limiting who has access to sensitive financial information — they can reduce the likelihood of breaches.
Generative AI and deepfake technology are making tax scams more sophisticated. Cybercriminals can now create realistic video and audio impersonations of IRS agents, tax professionals or even family members, tricking individuals into divulging sensitive information like Social Security numbers or tax credentials. To spot AI-generated content, look for subtle mismatches in tone, unnatural speech patterns or slight inconsistencies in the video. Scammers may also try to pressure you into taking urgent actions — if something feels rushed or too good to be true, it likely is.
Satyam Sinha, CEO and Co-founder at Acuvity:
These days, GenAI is being used for all manner of tasks to boost productivity. For example, it's quite easy to upload pay stubs to receive a summary of salary information, or to upload sensitive documents, such as a W-2 or financial statements, to correlate information. This poses significant risks that vary with the documents involved and with the GenAI service, its tools and plugins, and even the tier of usage. Everyone should be aware of the risks of sharing such content, especially on work devices, and understand that it could be prone to data leakage, training of models on your data, and data residency complications, among other things.
GenAI is here to stay, and what's needed is secure and responsible adoption that fosters productivity and innovation. Because GenAI attack vectors are new, and consumption of these tools will only grow, a "ground up" security mindset is needed to tackle the issues GenAI raises. Organizations must discover and visualize GenAI usage, formulate policies, and protect themselves accordingly.