By now we all know Generative AI (GenAI) systems can produce humanlike text, code, images and even voices. This capability is transforming how attackers operate and how defenders protect their businesses – including their external attack surfaces (what’s that? Everything an organisation exposes to the internet, including domains, web applications, DNS, email and cloud services). This blog explores how GenAI is reshaping the threat landscape, illustrates real-world incidents and outlines defensive strategies suitable for CISOs and IT managers.
How Attackers Weaponise Generative AI
AI generated phishing and business email compromise
Phishing remains the most common initial attack vector in cyber incidents, and GenAI is making it more convincing and scalable. Reports from security vendors note that GenAI platforms allow malicious actors to write polished phishing emails and set up fake websites that impersonate legitimate organisations. In 2024, 75% of cyberattacks began with a phishing email and 67.4% of phishing campaigns used some form of AI. Mostly for dramatically reducing spelling mistakes and increasing target-specific tailoring. Beyond just correct grammar, attackers are now using GenAI to conduct 'hyper-personalised' spear phishing at scale. These systems can scrape professional networking sites, company announcements, and social media to craft emails that reference specific projects or internal jargon, making them exceptionally difficult to spot.
“Malicious LLMs” such as WormGPT or FraudGPT (models trained without safeguards) are openly advertised in underground forums and can generate contextual phishing lures or malware code. A 2025 threat report from Abnormal AI details how generative AI enables attackers to create sophisticated business email compromise (BEC) campaigns.
Attackers can hijack existing email threads and have a GenAI system draft believable invoices or wire transfer requests with proper grammar and contextual understanding. An example in the report shows a BEC email that seamlessly continued a legitimate conversation, making it almost indistinguishable from real correspondence.
Microsoft observed a live phishing campaign in October 2025 that used LLM-generated HTML/SVG code to disguise malicious links inside a fake PDF invoice. The messages showed tell-tale features of AI-authorship such as over-long variable names and boiler-plate comments. (Microsoft blocks phishing scam which used AI-generated code to trick users)
Deepfakes and voice/video impersonation
Generative models for audio and video allow attackers to clone voices or create synthetic avatars of executives. In early 2024 a multinational firm lost ≈USD$25.6 million after an employee was tricked during a video conference where deepfake participants impersonated the CFO and colleagues. These attacks have evolved from one-off fraudulent calls to sophisticated, multi-stage campaigns. In a widely reported mid-2025 incident, a European energy firm lost over €45 million after attackers used AI-cloned voices of executives over several weeks to build a rapport with the finance team, grooming them for the eventual transfer request. This demonstrates a new level of patience and psychological manipulation powered by GenAI.
Deepfake fraud attempts increased by 3,000% from 2022 to 2023, and Gartner predicts that by 2026 deepfake attacks will erode trust in facial biometric authentication. Attackers also use AI voice cloning and phone spoofing tools to impersonate executives in vishing scams.
AI generated domains and DNS abuse
Generative AI is also used to create plausible domain names that evade traditional detection. Unit 42 (Palo Alto Networks) and other researchers observed cyber squatters feeding keywords into GenAI models to output numerous domain names that closely mimic legitimate brands. These AI generated domains are leveraged for phishing, malware distribution and brand impersonation. During preparations for the 2026 FIFA World Cup, security firm BforeAI uncovered batches of malicious domains following algorithmic patterns, suggesting AI assistance.
At a broader scale, Infoblox’s 2025 DNS threat report noted that 25.1% of 100.8 million newly observed domains were malicious or suspicious. Ninetyfive percent of threat-related domains appeared in only one customer’s environment, illustrating how difficult it is to detect them through shared blocklists.
AI assisted malware and exploit generation
Generative models trained on code can produce working malware or exploit scripts. Rapid7 warns that attackers use GenAI to automate reconnaissance, craft exploits and generate polymorphic code that evades detection. AI tools can also write infrastructure as code scripts or container definitions to spin up malicious infrastructure quickly. Malicious LLMs like WormGPT advertise the ability to produce ransomware or obfuscation code, lowering the barrier for less skilled attackers.
Indeed MalGEN (Saha & Shukla, 2025) demonstrated recently multi-agent generative frameworks that automatically design, build and test novel malware families until antivirus detection rates fall below a chosen threshold. Several samples bypassed leading EDR products in lab tests. (MalGEN: A Generative Agent Framework for Modeling Malicious Software in Cybersecurity)
Poisoning and Injecting Corporate AI Tools
As organizations integrate GenAI into their own tools, from customer-facing chatbots to internal knowledge bases, a new vulnerability has emerged: LLM injection. Attackers can craft malicious inputs that trick an organization's own AI into leaking sensitive data, ignoring safety protocols, or performing unauthorized actions. For example, a cleverly worded customer query could cause a service bot to reveal other customers' account details.
Defensive Uses of Generative AI and Emerging Solutions
While GenAI creates new attack opportunities, it also offers defenders powerful tools when used responsibly.
AI powered email and fraud detection
Modern email security platforms are turning to AI to combat AIgenerated phishing. Abnormal Security uses behavioural baselines to identify anomalies in how users communicate, rather than relying on static signatures. Barracuda Networks recommends augmenting spam filters with models trained on AIgenerated phishing content. Beyond technical controls, organisations must conduct regular awareness training that prepares employees for sophisticated voice or videobased scams.
AI driven external attack surface management (EASM)
External Attack Surface Management platforms continuously discover and monitor internet facing assets. Glasstrail integrates AI to provide AIpowered analysis to make the findings easier to understand and act on.
Predictive DNS security and domain protection
Because malicious domains can appear and disappear quickly, defensive tools now incorporate predictive analytics. Infoblox recommends preemptive blocking of domains based on DNS telemetry, as many threatrelated domains are unique per victim. Organisations should also strengthen DNS configurations (DNSSEC, SPF, DKIM, DMARC) and monitor for typosquatting.
Recommendations for CISOs and IT Managers
- Adopt an AIaware threat model. Recognise that attackers can now generate highquality phishing emails, deepfake voices and convincing domains at scale.
- Strengthen email and collaboration security. Deploy AIbased email security platforms that analyse behaviour and linguistic patterns. Make sure email hygiene is great with DMARC, SPF etc.
- Establish Out-of-Band Verification for High-Stakes Requests. In response to sophisticated deepfake attacks, enforce strict policies for verifying financial transfers. This should include mandatory confirmation using a pre-established, trusted channel or the use of pre-shared code words for verbal verification, as recently advised by CISA.
- Implement EASM and continuous discovery. Use EASM tools to build a comprehensive inventory of internetfacing assets and take advantage of AIassisted query and proactive threat modeling capabilities.
- Monitor DNS and domain registrations. Subscribe to predictive threat intelligence that flags newly registered or AIgenerated domains related to your brand and enforce DNS hygiene.
- Validate AI outputs and maintain human oversight. Use generative AI tools to assist with summarisation and triage, but review all recommendations before acting due to the risks of hallucination.
- Secure Your Own AI Deployments. If you are deploying customer-facing or internal GenAI tools, treat them as a critical part of your attack surface. Implement rigorous input validation and sanitization to defend against prompt injection attacks.
Conclusion
Generative AI is reshaping the cyber landscape. Attackers leverage it to craft convincing phishing emails, create realistic deepfakes, and automate exploit development. At the same time, defenders can harness AI for behavioural anomaly detection, naturallanguage investigation of external assets, and predictive threat intelligence. Effective external attack management in this new era requires combining AIenabled tools with rigorous governance and human expertise.
References
- Abnormal Security (2025). Future of Email Security: The 2025 Threat Report.
- BforeAI (2024). Research on Algorithmic Domain Generation for the 2026 FIFA World Cup.
- BreachLock Inc. Guidance on External Attack Surface Management (EASM).
- CISA (Cybersecurity & Infrastructure Security Agency) (2025). Advisory on AI-Enhanced Social Engineering.
- CyCognito Labs (2024). Can You Trust Generative AI for Cybersecurity Advice?
- Gartner, Inc. (2024). Predicts 2024: The Rise of Generative AI.
- Infoblox (2025). 2025 DNS Threat Report.
- Microsoft Security (2024). "Microsoft Defender EASM now with Copilot."
- Palo Alto Networks, Unit 42. Research on AI-Generated Domain Abuse.
- Rapid7 (2024). 2024 Mid-Year Threat Report.
- Verizon (2025). 2025 Data Breach Investigations Report (DBIR).
- Li, H., Gao, H., Zhao, Z., Lin, Z., Gao, J., & Li, X. (2025). LLMs caught in the crossfire: Malware requests and jailbreak challenges [Preprint]. arXiv. https://arxiv.org/abs/2506.10022 (LLMs Caught in the Crossfire: Malware Requests and Jailbreak Challenges)
- Saha, B., & Shukla, S. K. (2025). MalGEN: A generative agent framework for modelling malicious software in cybersecurity [Preprint]. arXiv. https://arxiv.org/abs/2506.07586 (MalGEN: A Generative Agent Framework for Modeling Malicious Software in Cybersecurity)
- Microsoft Threat Intelligence. (2025, 30 Sep). Microsoft blocks phishing scam which used AI-generated code to trick users. TechRadar Pro. https://www.techradar.com/pro/microsoft-blocks-phishing-scam-that-used-ai-generated-code-to-trick-users (Microsoft blocks phishing scam which used AI-generated code to trick users)
- Pakaluk, A. (2025, 22 Apr). Cybercriminals are winning with AI—Here’s how they’re using it. Forbes Technology Council. https://www.forbes.com/... (Cybercriminals Are Winning With AI—Here’s How They’re Using It)
- Check Point Research. (2025, 10 Feb). Threat intelligence report. https://research.checkpoint.com/2025/10th-february-threat-intelligence-report/ (10th February – Threat Intelligence Report - Check Point Research)
- Cybersecurity & Infrastructure Security Agency (CISA). (2024, 15 Apr). Joint guidance on deploying AI systems securely. https://www.cisa.gov/... (Joint Guidance on Deploying AI Systems Securely | CISA)
