
Redefining Enterprise Defense in the Era of AI-Led Cyberattacks

More cybercriminals are turning to using autonomous AI tools to upgrade their attacks, as exemplified by the recent utilization of Anthropic’s Claude Code, prompting an urgent need for enterprises to adopt agentic AI-driven security platforms and proactive defenses to counter AI-related threats.


By: Trend Micro

Nov 14, 2025



Key takeaways:

  • The AI-driven cyber espionage campaign last September involving Anthropic’s Claude Code tool signals an important shift in the threat landscape, as attackers increasingly use AI and AI agents to automate and scale sophisticated cyberattacks with minimal human intervention.
  • Trend™ Research highlights that criminal adoption of generative AI and agentic AI is evolving incrementally, with cybercriminals favoring tools like jailbroken large language models (LLMs) and deepfake services to lower barriers to entry, increase attack efficiency, and broaden the scope of targeted victims.
  • Agentic AI architectures enable threat actors to automate complex attack chains, rapidly adapt to changing circumstances, and launch persistent, scalable campaigns, challenging conventional security controls and necessitating a shift toward automated, agentic defenses.
  • To effectively counter AI-powered threats, enterprises must invest in agentic AI-driven security platforms, proactively simulate attack scenarios (for example, with digital twin technology), enhance threat intelligence and attribution methods, and promote responsible disclosure practices.

Anthropic’s recent disclosure of an AI-orchestrated cyber espionage campaign reflects the broader trend of threat actors using autonomous artificial intelligence (AI) to automate and scale their cyberattacks. The incident involved a China-aligned group that manipulated Anthropic’s Claude Code tool to autonomously target roughly 30 organizations worldwide, including tech companies, financial institutions, chemical manufacturers, and government agencies. The attackers bypassed AI guardrails through jailbreaking techniques, instructing the AI to conduct reconnaissance, develop exploit code, harvest credentials, and exfiltrate sensitive data, all with minimal human intervention. The event underscores the urgent need for enhanced safeguards and industry-wide collaboration against increasingly sophisticated AI-powered threats.

What we’re seeing in the threat landscape

Early stages

Trend Micro’s leading research into the criminal adoption of AI reveals a rapidly evolving landscape: Trend™ Research’s analysis of underground forums and marketplaces demonstrates that while cybercriminals were initially slow to adopt generative AI (GenAI) technologies, their interest and activity have accelerated. Early criminal use focused on leveraging AI tools like ChatGPT to assist in coding malware, generating phishing emails, and crafting social engineering campaigns. However, these activities typically involved using AI to improve existing attack methods rather than developing AI-powered malware itself.

A significant trend is the proliferation of so-called criminal large language models (LLMs). Most offerings in criminal circles are not truly custom-trained models, but rather jailbreak-as-a-service frontends – interfaces that use specially designed prompts to bypass the ethical safeguards of commercial LLMs and deliver unfiltered, malicious responses. Notable examples include WormGPT and DarkBERT, which have resurfaced in various forms, often accompanied by claims of new features or capabilities. Many such offerings are scams or simply repackaged interfaces to commercial models, yet the demand for privacy and anonymity among criminals drives continuous development.

Deepfake technologies represent another area of rapid growth. Criminals now offer deepfake services to bypass Know Your Customer (KYC) checks at financial institutions, facilitate scams, and perpetrate extortion. These services have become more affordable and accessible, with offerings ranging from image and video manipulation to real-time avatar generation for fraudulent video calls. The quality and sophistication of these tools are improving, enabling threat actors to target regular citizens and not just high-profile individuals.

Trend’s ongoing research in this area underscores that criminal adoption of AI is marked by incremental evolution rather than revolutionary change. Cybercriminals favor tools that lower barriers to entry and increase efficiency, such as jailbreaking existing LLMs and using deepfake services. The market is also rife with scams targeting other criminals, reflecting the opportunistic nature of an underground quick to seize on emerging AI capabilities. As GenAI capabilities continue to advance, Trend remains vigilant in tracking these developments and advising organizations to strengthen their defenses against increasingly sophisticated AI-driven threats.

Today

Attackers are not only using AI for code generation or jailbreaking LLMs; they have progressed to integrating AI into the malware itself. Notable cases, such as LameHug (PROMPTSTEAL) using HuggingFace-hosted AI to craft info-stealing scripts and PROMPTFLUX requesting obfuscation techniques from Google’s Gemini AI, demonstrate how adversaries are moving past traditional, static malware. Although threat actors still face challenges like API key revocation and the unpredictability of AI-generated code, the use of AI in cybercrime is poised to increase as attackers continue to explore new ways of exploiting these technologies, making proactive security strategies critical.
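
To make the defensive implication concrete, here is a minimal hunting sketch, not a Trend Micro feature and not a rule taken from the incidents above: it assumes a hypothetical DNS query log in CSV form (columns timestamp, src_host, query) and flags hosts that resolve hosted LLM inference endpoints without an approved reason, since malware of this kind has to reach those APIs at runtime. The domain watchlist and the host allow list below are illustrative placeholders.

# Illustrative hunting heuristic only; assumes a hypothetical CSV DNS log with
# columns: timestamp, src_host, query.
import csv
from collections import defaultdict

# Example watchlist of hosted LLM inference endpoints (illustrative, needs upkeep).
LLM_API_DOMAINS = {
    "api-inference.huggingface.co",        # Hugging Face hosted inference
    "generativelanguage.googleapis.com",   # Google Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

# Hosts with an approved reason to call LLM APIs (hypothetical names).
APPROVED_HOSTS = {"dev-ml-01", "dev-ml-02"}

def find_unexpected_llm_callers(dns_log_path: str) -> dict[str, list[str]]:
    """Return {src_host: queried LLM API domains} for hosts outside the allow list."""
    hits: dict[str, list[str]] = defaultdict(list)
    with open(dns_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            query = row["query"].rstrip(".").lower()
            if query in LLM_API_DOMAINS and row["src_host"] not in APPROVED_HOSTS:
                hits[row["src_host"]].append(query)
    return dict(hits)

if __name__ == "__main__":
    for host, domains in find_unexpected_llm_callers("dns_queries.csv").items():
        print(f"[!] {host} queried LLM API endpoints: {sorted(set(domains))}")

In practice this kind of logic would live in an EDR or SIEM query rather than a standalone script, and the watchlist would need continuous maintenance as attackers move to other hosted or self-hosted models.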

While conventional defenses like network segmentation, multi-factor authentication (MFA), and endpoint detection and response (EDR) remain foundational to cybersecurity, they are increasingly challenged by AI-powered threats. “Vibe-coded” attacks, in which AI-generated malicious code mimics trusted sources, further complicate attribution and signature-based detection: AI can craft malware fragments that closely resemble legitimate research or imitate the tactics of other threat actors, making it difficult for defenders to distinguish genuine from malicious activity.
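
To illustrate why exact signatures struggle here, the short sketch below contrasts a byte-level view with a behavioural one; the two "beacon" variants and the example.invalid URL are invented for the demonstration and deliberately harmless. Functionally equivalent code that an AI regenerates between campaigns hashes to completely different values, while an indicator of what the code actually does can still line up.

# Illustration only: the two variants and the URL are invented and harmless.
import hashlib
import re

VARIANT_A = b'''
import urllib.request

def beacon():
    return urllib.request.urlopen("https://example.invalid/checkin").read()
'''

VARIANT_B = b'''
from urllib.request import urlopen as _u

def beacon():
    response = _u("https://example.invalid/checkin")
    return response.read()
'''

def sha256(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Byte-level signatures and file hashes treat the variants as unrelated samples.
print("variant A:", sha256(VARIANT_A))
print("variant B:", sha256(VARIANT_B))

# An IOC- or behaviour-level view (here, the URL each variant contacts) still matches,
# which is why detection increasingly has to key on behaviour rather than bytes.
urls_a = set(re.findall(rb'https://[^"]+', VARIANT_A))
urls_b = set(re.findall(rb'https://[^"]+', VARIANT_B))
print("shared network indicator:", urls_a & urls_b)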

[...]

