Artificial intelligence has fundamentally changed the cybersecurity threat landscape. Attackers are using large language models (LLMs) and AI tools to generate more convincing phishing emails, write malware at scale, automate reconnaissance, and even create deepfake audio and video for social engineering. This isn’t science fiction — it’s the current reality in 2025.
How Attackers Are Using AI
1. AI-Generated Phishing at Scale
Traditional phishing suffered from obvious grammar mistakes and generic language. AI has eliminated that weakness.
# What AI-generated phishing looks like:
# OLD phishing (easy to spot):
# "Dear Customer, your acount has problem. Click here for fix it immediately!"
# AI-generated phishing (nearly perfect):
# "Hi Sarah, I'm following up on the Q3 budget review we discussed Tuesday.
# Per your request, I've attached the revised forecast. Could you review
# by EOD? The finance team needs sign-off before the board meeting.
# [Download Attachment: Q3_Budget_Forecast_v3.xlsx]"
#
# Key differences:
# - Uses victim's real name (from LinkedIn OSINT)
# - References real events (scraped from social media)
# - Professional tone with no grammar errors
# - Urgent but not panicked
# - Plausible context
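Since the message body no longer gives the game away, defenders shift to signals the attacker can't fake as easily, such as the sending domain. A minimal sketch of lookalike-domain flagging (the trusted-domain list and distance threshold are illustrative assumptions, not a production tuning):

```python
# Flag sender domains that are close to, but not exactly, a trusted domain.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED = {"example.com", "example-finance.com"}  # hypothetical trusted domains

def is_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """True if the domain nearly matches a trusted domain (typosquat)."""
    if sender_domain in TRUSTED:
        return False
    return any(edit_distance(sender_domain, t) <= max_distance for t in TRUSTED)

print(is_lookalike("examp1e.com"))   # → True  (l swapped for 1)
print(is_lookalike("example.com"))   # → False (exact trusted match)
```

Real email gateways combine this with SPF/DKIM/DMARC results; edit distance alone is only one weak signal.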
2. Malware Generation with LLMs
# Researchers demonstrated WormGPT and similar tools:
# - Stripped-down LLMs without safety filters
# - Generate working malware code on demand
# - Help non-technical criminals write ransomware
# Example capabilities (reported by researchers):
# - Write keyloggers in Python
# - Generate polymorphic shellcode
# - Create obfuscated PowerShell payloads
# - Debug malware and fix errors
# Defense: Focus on behavior, not signatures
# Modern AI-generated malware evades signature-based detection
# EDR solutions that detect behavior (lateral movement, credential access)
# are more effective than pattern-matching AV
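One behavior-adjacent heuristic that survives polymorphism is scoring command lines for obfuscation rather than matching known payloads. A sketch using Shannon entropy plus a few lexical indicators (token list and threshold are illustrative assumptions):

```python
# Score PowerShell command lines for likely obfuscation.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; base64/encoded blobs score noticeably higher."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical watch-list of tokens common in obfuscated payloads
SUSPICIOUS_TOKENS = ("-enc", "-encodedcommand", "frombase64string",
                     "iex", "downloadstring")

def looks_obfuscated(cmdline: str, entropy_threshold: float = 4.5) -> bool:
    lowered = cmdline.lower()
    token_hit = any(t in lowered for t in SUSPICIOUS_TOKENS)
    high_entropy = shannon_entropy(cmdline) > entropy_threshold
    return token_hit or high_entropy

print(looks_obfuscated("powershell -EncodedCommand SQBFAFgA"))  # → True
print(looks_obfuscated("Get-ChildItem C:\\Users"))              # → False
```

This is the kind of feature an EDR feeds into a larger model; on its own it will both miss things and false-positive on legitimate encoded commands.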
3. Deepfake Audio/Video Social Engineering
In 2024, a finance employee at a Hong Kong firm was tricked into wiring $25 million after a video call with deepfake “colleagues” — including a deepfake of the CFO. This technique is now being used in targeted attacks against executives.
# Deepfake attack indicators:
# - Unscheduled video call requests
# - Audio/video quality issues (artifacts, lip sync problems)
# - Unusual payment requests or urgency
# - Caller cannot be reached back on known numbers
# Organizational defenses:
# 1. Code words for voice/video verification
# Establish a pre-shared verification phrase
# "Before we discuss financials, what's our team code word?"
# 2. Out-of-band verification for transactions:
# ANY financial request received via call/email must be verified
# via a SEPARATE channel (call back on known number, in-person)
# 3. AI deepfake detection tools:
# Microsoft Video Authenticator
# Intel FakeCatcher
# Sensity AI
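The dual-channel rule above is simple enough to encode directly in an approval workflow. A sketch, assuming a hypothetical directory of known callback numbers keyed by employee ID:

```python
# Enforce out-of-band verification before approving any payment request.
from dataclasses import dataclass
from typing import Optional

KNOWN_NUMBERS = {"emp-001": "+1-555-0100"}  # hypothetical callback directory

@dataclass
class PaymentRequest:
    requester_id: str
    amount: float
    channel: str                       # "email", "video_call", "phone"
    verified_via: Optional[str] = None # e.g. "callback:+1-555-0100"

def approve(req: PaymentRequest) -> bool:
    """Approve only if verified via callback on a pre-registered number."""
    if req.verified_via is None:
        return False  # no out-of-band verification performed
    method, _, number = req.verified_via.partition(":")
    return method == "callback" and KNOWN_NUMBERS.get(req.requester_id) == number

req = PaymentRequest("emp-001", 50000.0, "video_call",
                     verified_via="callback:+1-555-0100")
print(approve(req))  # → True
```

The point is that the verification channel is looked up from a trusted directory, never taken from the (possibly deepfaked) request itself.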
4. AI-Automated Vulnerability Scanning
# Attackers use AI to speed up reconnaissance:
# Tools like AutoGPT + security tools can autonomously:
# - Enumerate subdomains
# - Find open ports
# - Identify vulnerable software versions
# - Generate and test exploits
# - Pivot through networks
# Speed advantage:
# Manual recon: hours to days
# AI-assisted recon: minutes
# This dramatically shrinks the time-to-exploitation window
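The defensive counterpart is to detect that machine-speed probing early. A sketch of sliding-window port-scan detection over connection logs (the tuple log format and thresholds are illustrative assumptions):

```python
# Flag sources that probe many distinct ports within a short window.
from collections import defaultdict

def find_scanners(events, window_seconds=60, port_threshold=20):
    """events: iterable of (timestamp, src_ip, dst_port), sorted by time."""
    by_src = defaultdict(list)   # src_ip -> [(ts, port), ...] in window
    scanners = set()
    for ts, src, port in events:
        hits = by_src[src]
        hits.append((ts, port))
        # Evict hits that fell out of the sliding window.
        while hits and ts - hits[0][0] > window_seconds:
            hits.pop(0)
        if len({p for _, p in hits}) >= port_threshold:
            scanners.add(src)
    return scanners

# 25 distinct ports from one IP in 25 seconds → flagged
events = [(i, "203.0.113.9", 1000 + i) for i in range(25)]
print(find_scanners(events))  # → {'203.0.113.9'}
```

Production IDS tools (Suricata, Zeek) do this with far more nuance, but the core idea is the same: rate of distinct targets per source, per window.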
How Defenders Are Fighting Back with AI
AI-Powered Email Security
# Modern email security platforms using AI:
# Microsoft Defender for Office 365 — uses ML for impersonation detection
# Abnormal Security — behavioral AI, detects "this email is unusual for this sender"
# Darktrace Email — unsupervised ML baseline of normal communication patterns
# Key capabilities:
# - Detect AI-generated phishing even without known indicators
# - Identity chain analysis (if this is really from John Smith, why is tone different?)
# - Account compromise detection (login from new location + unusual email = flag)
# Free option: Google's Advanced Protection Program (works with Workspace)
# Enforces hardware security keys for high-risk accounts
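One impersonation check these platforms perform can be sketched simply: a display name that matches a known executive but arrives from an outside domain. The VIP list and internal domain below are hypothetical:

```python
# Flag display-name impersonation of known executives.
EXECUTIVES = {"john smith", "sarah lee"}  # hypothetical VIP display names
INTERNAL_DOMAIN = "example.com"

def is_display_name_spoof(display_name: str, address: str) -> bool:
    """True if a VIP's name is paired with a non-internal sender address."""
    name_match = display_name.strip().lower() in EXECUTIVES
    domain = address.rsplit("@", 1)[-1].lower()
    return name_match and domain != INTERNAL_DOMAIN

print(is_display_name_spoof("John Smith", "john.smith@gmail.com"))    # → True
print(is_display_name_spoof("John Smith", "john.smith@example.com"))  # → False
```

Commercial tools extend this with behavioral baselines (tone, send times, reply chains) rather than a static list, but the mismatch between claimed identity and actual sending domain remains a high-signal feature.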
AI for Threat Detection (SIEM/SOAR)
# Wazuh + custom rules for anomaly detection:
# Note: Wazuh has no built-in ML module; anomaly detection is typically
# built from custom rules, or by exporting alerts to an external
# analytics pipeline. Base logging config in /var/ossec/etc/ossec.conf:
# <ossec_config>
#   <logging>
#     <log_format>plain</log_format>
#   </logging>
# </ossec_config>
# Microsoft Sentinel — UEBA (User Entity Behavior Analytics):
# Baselines normal user behavior then flags deviations
# KQL query for impossible travel detection:
let timeframe = 1h;
SigninLogs
| where TimeGenerated > ago(timeframe)
| project UserPrincipalName, Location, TimeGenerated, IPAddress
| summarize Locations = make_set(Location), IPs = make_set(IPAddress) by UserPrincipalName
| where array_length(Locations) > 1
// User logged in from two different countries in 1 hour = impossible travel
Defending Against AI-Powered Attacks
# 1. Assume phishing is now undetectable by eye alone
# Focus on: MFA everywhere, email link sandboxing, user training on verification
# 2. Security awareness training must evolve:
# Show employees AI-generated phishing examples
# Teach verification procedures, not just "look for bad grammar"
# 3. Implement FIDO2/hardware keys for critical accounts:
# Even if a password is phished, FIDO2 credentials are bound to the real
# site's origin, so they can't be replayed from a phishing page
# 4. AI-enhanced monitoring:
# Deploy UEBA solutions that detect behavioral anomalies
# Don't rely solely on signature-based detection
# 5. Deepfake policy:
# Any financial request by phone/video requires dual-channel verification
# Document this policy and train all finance staff
Wrap Up
AI hasn’t changed the fundamental nature of cyberattacks — they still rely on credential theft, malware delivery, and social engineering. But AI dramatically lowers the skill barrier for attackers and raises the quality of their attacks. The defender’s response must adapt: move from signature-based to behavioral detection, implement hardware authentication, and evolve security awareness training for the AI age.