Targeted by AI: The Dark Side of Open Tools

May 20, 2025

By GSOC Manager, Dan Libby

The democratization of artificial intelligence has ushered in unprecedented capabilities in productivity, creativity, and communication. But as with any disruptive technology, the potential for misuse has followed closely behind. In the evolving landscape of personal and executive security, threat actors are now turning to openly available artificial intelligence (AI) systems, such as publicly accessible chatbots and summarization tools, to expedite, enhance, and obscure their targeting efforts.

How AI Changes the Physical Security Landscape

Openly available AI systems can synthesize data from press appearances, social media, public property records, and professional biographies to help malicious actors build target profiles, pretexts, timelines, and access scenarios, all without leaving their homes.

What once took weeks of manual research, reconnaissance, and surveillance can now be shortened significantly by:

  • Instructing AI to summarize a target’s PII, known affiliations, and travel patterns
  • Asking AI to describe routine behaviors of people in similar roles
  • Generating realistic-sounding pretext messages (e.g., interview requests, event invitations)
  • Identifying lower-security environments based on target behavior and public routines

This process doesn’t require nation-state resources or insider knowledge; it simply requires intent, an internet connection, and a few prompts.

Real-World Case Study: The Tesla Cybertruck Bomber – January 1, 2025

On January 1, 2025, Matthew Livelsberger parked a Tesla Cybertruck outside the front entrance of the Trump International Hotel in Las Vegas and attempted to detonate a collection of firework mortars and camp-fuel canisters loaded in the vehicle. Although the devices failed to detonate properly, local officials described the attack as an attempted mass-casualty event. A Las Vegas Metropolitan Police Department press release stated, “Livelsberger used advanced Generative Artificial Intelligence technology to research explosives and ignition mechanisms.”

This incident underscores a chilling reality: individuals can now use AI tools to plan high-profile attacks with speed, depth, and precision, decreasing the need for traditional surveillance skills or insider access. Despite the ethical guardrails built into platforms like OpenAI’s ChatGPT, motivated users have found ways to manipulate these systems through indirect prompts, analogies, or scenario-based queries that skirt content restrictions. In the wrong hands, these tools become digital accomplices, accelerating the path from fixation to action, a process we refer to as the attack cycle.

And while the Las Vegas incident demonstrates the potential for AI to assist in physical attack planning, an equally insidious threat lies in how these same tools can be weaponized for deception, through the exploitation of social media footprints and publicly available data to engineer trust, access, and opportunity.

AI-Generated Images and AI-Assisted Social Engineering: Building Believable Pretexts

Recent investigations into online romance scams and catfishing networks have revealed the increasing use of AI to create sophisticated and deceptive personas. In one case, highlighted by the BBC in Hunting the Catfish Crime Gang, a criminal network used AI-generated photos to build fake dating profiles that impersonated a British journalist. These synthetic images, produced using generative adversarial networks (GANs), created the illusion of real individuals, complete with subtle facial imperfections and realistic lighting, making them far more convincing than traditional stock photos or stolen identities. Because these images were unique and AI-generated, they could evade reverse image searches and inspire a high level of trust among unsuspecting victims.

However, while the AI-generated images served as the primary visual hook, it is plausible that such images could be combined with other AI-assisted social engineering techniques to construct more elaborate and believable pretexts. Threat actors could use AI-driven language models to generate emotionally resonant messages tailored to a target’s values, interests, or vulnerabilities, particularly when informed by insights gathered from public social media footprints. These pretexts could be further enhanced with AI-generated backstories, such as fabricated career histories, travel narratives, or shared affiliations with recognizable organizations. In more advanced cases, AI tools might even be used to replicate writing style or tone, creating interactions that feel authentic and human. While the BBC investigation did not confirm the use of these specific techniques, they represent a credible evolution of current AI capabilities that significantly increase the effectiveness and believability of such scams.

This convergence of AI tools forms a new threat paradigm in social engineering. When used maliciously, they don’t just impersonate people, they build entire realities around those personas. These multi-layered narratives, rooted in synthetic visuals and context-aware messaging, are designed to cultivate trust and manipulate targets into real-world action, whether that means meeting in person, sharing sensitive information, or compromising physical security. For executive protection teams, security professionals, and threat analysts, this highlights the growing need to account for AI-enabled pretexting as a credible and scalable method of attack, capable of enabling access, movement manipulation, or personal targeting in the physical world.

Who Is Most at Risk?

  • High-net-worth individuals with routine media presence or lifestyle posts
  • Public-facing executives and celebrities with real-time social media engagement
  • Philanthropic figures who attend or sponsor open events
  • Support staff, assistants, drivers, or security contractors who are easier to deceive or impersonate

Strategies to Mitigate the Risk

1. Restrict Real-Time Exposure

  • Post photos or event highlights after the fact, not live
  • Avoid tagging current locations on social platforms
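Photos themselves can leak location even when no tag is applied, because many phones embed GPS coordinates in EXIF metadata. As a minimal illustration of that point, the following Python sketch (assuming the Pillow library; the file names are placeholders) re-saves an image with pixel data only, discarding EXIF metadata before the photo is shared:

```python
# Minimal sketch: strip EXIF metadata (including GPS geotags) from a photo
# before it is posted. Assumes the Pillow library (pip install Pillow);
# the file paths are placeholders for illustration only.

from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, discarding EXIF/GPS metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels, not metadata
        clean.save(dst_path)

if __name__ == "__main__":
    strip_metadata("event_photo.jpg", "event_photo_clean.jpg")
```

Most social platforms strip this metadata on upload, but sanitizing images before they leave the device removes any dependence on that behavior.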

2. Conduct Routine OSINT Sweeps

  • Use professional tools to check what AI, data aggregators, and search engines can collect about the potential target
  • Identify overexposed affiliations or predictable behavior
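One small, concrete component of such a sweep is checking whether a principal’s email addresses appear in known data breaches, since breached records feed the data aggregators that attackers query. The sketch below assumes the Python requests library and a Have I Been Pwned API key; the key value and email address are placeholders, and this is an illustration rather than a substitute for professional OSINT tooling:

```python
# Minimal sketch: check whether an email address appears in known breaches
# via the Have I Been Pwned API (v3). Assumes the requests library and a
# valid HIBP API key; the key and email below are placeholders.

import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"

def breach_check(email: str, api_key: str) -> list[str]:
    """Return the names of breaches containing this email, or an empty list."""
    resp = requests.get(
        HIBP_URL.format(account=email),
        headers={"hibp-api-key": api_key, "user-agent": "osint-self-audit"},
        params={"truncateResponse": "true"},
        timeout=10,
    )
    if resp.status_code == 404:  # address not found in any known breach
        return []
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

if __name__ == "__main__":
    hits = breach_check("principal@example.com", api_key="YOUR_HIBP_KEY")
    print("Breaches found:", hits or "none")
```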

3. Harden the Inner Circle

  • Train staff to verify all requests, no matter how flattering or urgent
  • Introduce validation codes or callbacks before any calendar, location, or access change, and before sharing itinerary details or making money transfers (a minimal sketch of this rule follows below)
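A lightweight way to operationalize the callback rule is to treat every sensitive request as unverified until it has been confirmed through a channel stored independently of the request itself. The sketch below is hypothetical: the contact directory, action names, and request fields are illustrative assumptions, not a specific product or protocol:

```python
# Hypothetical sketch of a callback-verification rule: a sensitive request
# (calendar change, itinerary release, funds transfer) is approved only after
# confirmation via a phone number stored independently of the inbound message.
# The directory contents and request structure are illustrative assumptions.

from dataclasses import dataclass

# Directory maintained out-of-band by the security team; never taken from
# the inbound message, since an attacker controls everything in the message.
TRUSTED_CALLBACKS = {
    "cfo@example.com": "+1-555-0100",
    "ea@example.com": "+1-555-0101",
}

SENSITIVE_ACTIONS = {"calendar_change", "itinerary_release", "funds_transfer"}

@dataclass
class Request:
    sender: str
    action: str
    confirmed_by_callback: bool = False  # set True only after a live callback

def approve(request: Request) -> bool:
    """Approve non-sensitive actions; otherwise require a completed callback."""
    if request.action not in SENSITIVE_ACTIONS:
        return True
    if request.sender not in TRUSTED_CALLBACKS:
        return False  # unknown requester: escalate to the protection team
    return request.confirmed_by_callback

if __name__ == "__main__":
    req = Request(sender="ea@example.com", action="funds_transfer")
    print("Approved:", approve(req))  # False until a callback confirms it
```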

4. Deploy AI Defenses Against AI Threats

  • Use email filtering systems that detect AI-written communication patterns
  • Install behavioral firewalls that flag imposter language or tone shifts
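Commercial filters rely on trained classifiers for this, but the underlying idea can be shown with simple heuristics. The sketch below is a toy example only; the phrase list, threshold, and domain check are assumptions, and a real deployment would use a vetted detection product rather than hand-written rules:

```python
# Toy sketch of pretext-screening heuristics for inbound email. The phrase
# list, threshold, and domain logic are illustrative assumptions; a real
# deployment would rely on a vetted, trained detection product.

import re

PRESSURE_PHRASES = [
    "urgent", "immediately", "confidential", "wire transfer",
    "change of account", "do not call", "reply only by email",
]

def flag_message(sender: str, claimed_domain: str, body: str) -> list[str]:
    """Return human-readable reasons this message deserves manual review."""
    reasons = []
    # The sending address does not match the organization the message claims.
    if not sender.lower().endswith("@" + claimed_domain.lower()):
        reasons.append(f"sender domain mismatch (claims {claimed_domain})")
    # Pressure and secrecy language typical of engineered pretexts.
    hits = [p for p in PRESSURE_PHRASES if re.search(re.escape(p), body, re.I)]
    if len(hits) >= 2:
        reasons.append(f"pressure language: {', '.join(hits)}")
    return reasons

if __name__ == "__main__":
    sample = ("This is urgent and confidential. Please action the wire "
              "transfer immediately and reply only by email.")
    print(flag_message("j.doe@lookalike-example.net", "example.com", sample))
```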

5. Establish a Security Baseline

  • Know where the target’s exposure is greatest: home, travel, online, or professional
  • Regularly review personal and family digital habits with a threat analyst

The New Arms Race

AI isn’t just revolutionizing business or creativity; it’s now part of the threat model. The same tools that generate business pitches and marketing copy are now being used to generate trust, simulate authenticity, and deceive high-profile targets.

The risks aren’t hypothetical. They are happening now. If you’re a public-facing figure, executive, or security professional tasked with protecting one, it’s time to move beyond traditional digital hygiene. This is threat modeling for the AI age, and the rules are changing faster than many organizations realize.