Next Phase of AI in the Threat Landscape

December 2, 2025

By TorchStone Intelligence Analyst, Jane Morency

In May 2025, TorchStone’s GSOC Manager Dan Libby wrote about how the rise of accessible artificial intelligence (AI) platforms has introduced a new dimension to the threat landscape: attackers are increasingly leveraging AI systems, including publicly available chatbots and summarization tools, “to expedite, enhance, and obscure their targeting efforts.” Dan highlighted two recent cases: one in which AI assisted attackers in researching explosives and ignition mechanisms, and another in which AI-generated images were used to evade reverse image searches and build trust in online romance scams and catfishing schemes.

While these cases underscore AI’s expanding role in enhancing attacker capabilities, recent developments show that the technology has moved beyond assistive uses and escalated the potential for harm even further. Attackers are now using AI to generate functioning malware code for cyberattacks, creating yet another vector in the ever-evolving threat landscape.

AI-Generated Malware Warnings Mounting

In October, TorchStone analysts alerted clients with interests in Taiwan to a recently released Dark Reading report describing how “Drop Pitch,” a China-aligned hacking group, has been experimenting with AI-assisted cyberattacks targeting Taiwan’s semiconductor, finance, academic, and policy research sectors. Drop Pitch reportedly used easily accessible AI tools, including ChatGPT and open-source vulnerability scanners, to develop a custom backdoor for accessing targeted organizations’ networks. In this case, the attacks were largely unsuccessful because the AI-generated malware contained critical flaws. Although these attempts fell short, they illustrate how quickly threat actors are exploring generative AI tools to enhance cybercriminal operations, posing future risks to global industries and digital infrastructure.

Other key players in the AI field are also sounding the alarm. In November, both Google and Anthropic released reports detailing their latest findings and industry-wide concerns about AI-enabled malware. Both reports were prompted by incidents earlier this year involving the misuse of their own large language models (LLMs).

Google’s Findings: Malware That Talks to AI

In June, Google’s threat investigators spotted a well-known Russian hacking group, APT28, using a new malware tool called PROMPTSTEAL in attacks on Ukraine. PROMPTSTEAL is designed to steal information, and it does so in part by querying an LLM during the attack to generate the commands it executes rather than relying only on hard-coded instructions. This incident marked the first time Google observed malware communicating with an AI model during an attack to generate operational commands in real time. Google further identified four new malware tools that leverage AI to conceal their code, generate attack capabilities on demand, and produce dynamic code scripts. While the technology is still developing, it signals a meaningful shift toward increasingly self-directed and flexible malware.

Anthropic’s Findings: An AI-Driven Espionage Campaign

Anthropic’s report was likewise prompted by the identification of an AI-driven intrusion campaign. In mid-September 2025, the company uncovered an espionage operation, likely conducted by a Chinese state-sponsored hacking group, that exploited the Claude Code tool to attempt intrusions against approximately thirty targets; the attackers succeeded in only a few cases. According to the report, “the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically.”

To limit further harm, Google and Anthropic promptly disabled the accounts and assets connected to the observed malicious activity.

So, What Next? Adapt, Prepare, Defend.

Corporations, executives, and high-net-worth individuals are already prime targets for cyber threat actors. The technological developments detailed above let attackers craft highly tailored intrusion attempts faster and at much larger scale, adding to their existing capacity to personalize attacks and evade detection through AI-assisted phishing and deepfakes. Given these heightened risks, several mitigation techniques can help reduce exposure:

  1. Update security briefings and training to include AI-driven threats, such as deepfakes and personalized phishing attempts.
  2. Run regular red team exercises that simulate AI-generated phishing emails or malware deployment to help teams recognize and respond to emerging tactics.
  3. Strengthen your personal security by reducing unnecessary online exposure, making it harder for attackers to collect information about you.
  4. Pair personalized, human-led threat assessments with advanced AI security tools to better detect and respond to potential breaches (an illustrative example follows this list).
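
One concrete way to act on item 4 is to treat outbound traffic to LLM APIs as a detection signal: malware that “talks to AI,” as in the PROMPTSTEAL case described above, still has to reach an LLM endpoint over the network. The short Python sketch below illustrates the idea by sweeping a proxy log for LLM API connections from internal hosts that have no approved reason to make them. The log format, file name, endpoint list, and allowlist are illustrative assumptions for this example, not details drawn from the Google or Anthropic reports.

    # Illustrative sketch: flag internal hosts contacting LLM API endpoints
    # without an approved business reason. The log columns, file path,
    # endpoint list, and allowlist below are assumptions for this example.
    import csv

    # Public LLM API hostnames worth watching in egress traffic (extend as needed).
    LLM_API_HOSTS = {
        "api.openai.com",
        "api.anthropic.com",
        "generativelanguage.googleapis.com",
    }

    # Internal systems expected to call LLM APIs (hypothetical allowlist).
    APPROVED_SOURCES = {"10.0.5.21", "10.0.5.22"}

    def flag_unexpected_llm_traffic(proxy_log_path):
        """Return proxy log rows where a non-approved host reached an LLM API."""
        findings = []
        with open(proxy_log_path, newline="") as f:
            # Assumes a CSV with at least these columns: timestamp, src_ip, dest_host
            for row in csv.DictReader(f):
                if row["dest_host"] in LLM_API_HOSTS and row["src_ip"] not in APPROVED_SOURCES:
                    findings.append(row)
        return findings

    if __name__ == "__main__":
        for hit in flag_unexpected_llm_traffic("proxy_egress.csv"):
            print(f"{hit['timestamp']}  {hit['src_ip']} -> {hit['dest_host']}")

In practice, a check like this would run inside an existing SIEM or egress-monitoring workflow rather than as a standalone script, and the endpoint list would need regular updates as new AI services appear.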

To stay ahead of escalating threats, executives and their security teams must treat AI-driven malware as a strategic risk and prepare now by developing detection programs, conducting tabletop exercises, and improving overall organizational security hygiene. TorchStone can help your organization build and implement these capabilities through tailored assessments, regular threat monitoring and reporting, and topical training exercises.

Cybercriminals will not shy away from using every tool available to orchestrate attacks, so defenders must be equally adaptive to keep pace with this rapidly evolving threat.