Implications of AI for Corporate Security

March 19, 2024

By TorchStone VP, Scott Stewart

On February 29, I was honored to serve as the moderator for a panel on “The Rise of AI and its Impact on Corporate Security” at the 2024 Ontic Summit. The panel not only gave me a reason to focus my own thoughts on the topic, but also let me learn from the insights of the other panelists: Phil Capizzi of the Secure Community Network; Keith White, Chief Security Officer at Salesforce; and Brian Tuskan, VP and Chief Security Officer at ServiceNow. I am indebted to the three panelists for the way they helped solidify and shape my thoughts on the topic during our panel preparation discussions.

AI Will Not Replace People

I have been in the intelligence and protective intelligence field for nearly four decades. Over that time, I have witnessed a lot of technological change. When I was first assigned to the Diplomatic Security Service Counterterrorism Investigations Branch (later renamed Protective Intelligence Investigations), we were in the process of moving from file folders to a digitized case management system accessed through Wang computer terminals featuring monochrome green text on a black background. Since then, computers have evolved to become an integral component of nearly every aspect of security and investigations.

Artificial intelligence, whether machine learning or generative, is just another extension of the computer and is likely to become an important tool for security practitioners. But it will remain just that—a tool. As such, it will require someone to employ it, and more importantly, employ it properly and with skill. All tools are only as effective as the craftsman wielding them.

Certainly, the rise of the computer age did result in some jobs changing. When I first joined the State Department, the field office had a group of clerk typists who typed up the reports agents dictated. Today, agents type their own reports using word processing software, and the clerk typists are gone. However, field offices now have teams of investigative analysts who support agents by conducting database checks, analyzing data obtained from search warrants and telephone taps, and performing similar work.

Typing was mundane work, and arguably the work investigative analysts do today is more interesting. However, some of the tasks investigative analysts and other security personnel currently perform can still be mundane, repetitive, and outright mind-numbing. AI can assist analysts with many of these tasks, such as poring over phone records. This saves time and frees analysts to focus on more important work, such as placing the results derived from AI tools into the proper context.
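
To make the idea concrete, here is a minimal, rule-based sketch in Python of the kind of rote first pass a tool might take over exported call records. The field names are invented, and real machine-learning tools go far beyond simple rules like these; the point is only to illustrate the repetitive review work that can be handed off.

```python
# Illustrative sketch only -- a toy, rule-based triage pass over exported
# call records. The CSV field names ("callee", "timestamp") are hypothetical.
import csv
from collections import Counter
from datetime import datetime

def triage_call_records(path, late_hour=23, early_hour=5):
    """Surface high-frequency contacts and odd-hour calls for analyst review."""
    contact_counts = Counter()
    odd_hour_calls = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            contact_counts[row["callee"]] += 1
            hour = datetime.fromisoformat(row["timestamp"]).hour
            if hour >= late_hour or hour < early_hour:
                odd_hour_calls.append(row)
    # The analyst, not the script, decides what these patterns mean.
    return contact_counts.most_common(10), odd_hour_calls
```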

We have already seen this happening with the AI programs being used to help security personnel monitor CCTV feeds. Rather than having a person watch dozens of screens looking for suspicious activity or behavior, AI-enabled systems can be set to alert the console operator when a specific type of behavior occurs in a specific place, relieving operators of a great deal of tedium and helping them spot and respond to problems more quickly.
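
As a hypothetical sketch of the alerting layer such a system might place on top of a detection model, the logic is simple: interrupt the operator only when a watched behavior occurs in a watched zone. The zones, behavior labels, and rule set below are invented, not any specific product's schema.

```python
# Illustrative sketch of routing detections from a hypothetical
# video-analytics model to a console operator.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    zone: str          # area of the facility the camera covers
    behavior: str      # label emitted by the detection model
    confidence: float  # model's confidence in the label, 0.0-1.0

# Only interrupt the operator for watched behaviors in watched places.
ALERT_RULES = {
    "loading_dock": {"loitering", "fence_climbing"},
    "lobby": {"weapon_visible"},
}

def should_alert(d: Detection, min_confidence: float = 0.8) -> bool:
    return (d.confidence >= min_confidence
            and d.behavior in ALERT_RULES.get(d.zone, set()))

def route_detections(detections):
    for d in detections:
        if should_alert(d):
            print(f"ALERT {d.camera_id}: {d.behavior} in {d.zone} "
                  f"({d.confidence:.0%})")
```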

Technology as a Weapon

In the late 1990s, I spent a lot of time traveling with Michael Dell. This was when e-commerce was first emerging. Michael, a strong believer in the power of e-commerce, often spoke about the revolutionary impact it was going to have on business. Certainly, time has shown how correct he was. Michael frequently said e-commerce (and by extension all technology) was like a loaded gun lying on the table: either your company was going to pick it up and use it, or your competitors were going to pick it up and use it against you.

When it comes to AI and security, I like to put my own twist on Michael’s analogy. For many years I ran a fencing club and taught hundreds of kids to fence. Based on my experience as a fencing instructor, I see AI as being more like a rapier than a gun. A rapier can be used to attack an opponent, but it can also be used to parry (block an attack) and to riposte (counterattack). I believe AI will prove to be just as important a tool for defense as it is for offense, and again, its effectiveness as a defensive tool will depend on the ability of the person wielding it.

Make no mistake, an array of threat actors are already embracing AI and seeking to use it for a variety of motives and purposes. Deepfake photos, audio, and video are already being used in executive fraud schemes and to inflict reputational damage on celebrities and companies. Deepfakes are also likely to appear in bomb threat and swatting cases: AI can generate realistic, novel threat calls that can be created, modified, and broadcast at a far higher rate than anyone working manually could manage. AI also has the potential to give extremist influencers the ability to rapidly create and disseminate propaganda intended to draw others to their radical causes.

Because of AI’s ability to rapidly process information, it also has the potential to make it far easier for threat actors to collect personally identifiable information (PII) on celebrities, politicians, executives, and other high-profile individuals. The rise of the internet has already made stalking someone much easier than before. AI is likely to take this to another level, enabling even those lacking good internet research skills to compile the information they need to begin physical surveillance of their target.

Parrying the Threat

Given the current and emerging threats posed by AI, it is all the more incumbent upon security leaders to embrace the technology and encourage their teams to explore how they can (safely) integrate AI into their operations for defensive purposes.

Many of the tools security teams use for tasks like monitoring social media already employ machine learning to make them more effective, but I also believe we need to encourage our teams to experiment with generative AI so they can begin to master that aspect of the technology.

My team has already begun doing this, and as my colleague Ben West recently wrote, generative AI holds promise but also has clear limitations. It requires close human oversight to ensure that it is performing its tasks properly and that it is not generating false information from whole cloth.
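
Some of that oversight can itself be scripted. As one hypothetical sketch (the "[DOC-123]" citation format is invented for illustration), a team could mechanically flag any AI-drafted summary that cites a document not actually in its source set, leaving the substantive review to the analyst.

```python
# Illustrative sketch -- one small, automatable piece of human oversight:
# flag generated summaries that cite documents not in the source set.
import re

def flag_unsupported_citations(summary: str, source_doc_ids: set) -> list:
    """Return cited document IDs that have no matching source document."""
    cited = set(re.findall(r"\[(DOC-\d+)\]", summary))
    return sorted(cited - source_doc_ids)

# Example: the summary cites DOC-7, which was never provided.
print(flag_unsupported_citations(
    "Subject visited the site twice [DOC-3] and made threats [DOC-7].",
    {"DOC-1", "DOC-2", "DOC-3"},
))  # -> ['DOC-7']
```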

Eventually, AI will prove effective in helping to spot deepfakes, identify patterns in AI-generated swatting and bomb threat campaigns, and counter the propaganda and disinformation generated by threat actors. It may also help us protect the PII of our principals.

As AI matures and as both bad actors and defenders become more comfortable using it as a tool, we will undoubtedly see it used in ways that we can’t even begin to imagine today. Consider how hacking has evolved since the early phone phreakers of the 1970s, and how information security has evolved in parallel response.

Undoubtedly, it will be impossible to anticipate and protect against every attack by the bad actors who will embrace AI and use it for ill—but the battle will be far easier if we encourage our teams to learn to master AI as a defensive weapon now before it is too late to catch up.