I recently had the good fortune to attend the Ai4 Cybersecurity conference in New York City. This event brought together thought leaders, influencers and practitioners over two days to discuss the role of artificial intelligence (AI) and augmented intelligence in the cybersecurity industry. Here are some of the key highlights and takeaways from the event that pertain to application security.

Most People Are Struggling With Data

Whether it was the Federal Bureau of Investigation (FBI), the U.S. Department of Homeland Security (DHS), the NYC Cyber Hub, or any other organization or vendor, one thing was crystal clear: The amount of data we are all dealing with is enormous, and it is becoming more and more difficult to work with — let alone use to make sensible decisions efficiently. When you add in the fact that, for most organizations, the number of people dealing with this data tends to be fairly constant, the problem only grows. There is a definite skills shortage in this space today. The picture, however, is not entirely bleak.

Nicole Eagan, CEO of Darktrace, shared that in very specific use cases where analysts are searching for something exact, AI is outperforming human analysts in 46 percent of cases and tying in another 40 percent. A good example of such a use case is Domain Name System (DNS) tunneling. In these cases, security teams monitor the size of DNS request and reply queries and use rules to block high volumes of traffic from particular sources. The point is that when there is a well-defined use case, a measurable outcome, and the necessary supporting data, AI can substantially bolster cybersecurity efforts.
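To make that concrete, here is a minimal, rule-based sketch of the kind of DNS tunneling check described above. The query records, field names and thresholds are hypothetical and chosen for illustration; a real deployment would tune them against baseline traffic for the environment, and an AI-assisted approach would learn them rather than hard-code them.

```python
from collections import defaultdict

# Illustrative thresholds only; real detections tune these against baseline traffic.
MAX_QUERY_LENGTH = 120          # bytes; unusually long DNS names suggest encoded data
MAX_LONG_QUERIES_PER_MIN = 50   # volume of long queries tolerated per source

def flag_dns_tunneling(query_log):
    """Flag source IPs issuing many oversized DNS queries.

    `query_log` is assumed to be an iterable of (source_ip, query_name)
    tuples collected over a one-minute window.
    """
    long_query_counts = defaultdict(int)
    for source_ip, query_name in query_log:
        if len(query_name) > MAX_QUERY_LENGTH:
            long_query_counts[source_ip] += 1

    # Sources exceeding the volume threshold become candidates for blocking.
    return [ip for ip, count in long_query_counts.items()
            if count > MAX_LONG_QUERIES_PER_MIN]

# Example usage with fabricated records:
sample = [("10.0.0.5", "a" * 200 + ".exfil.example.com")] * 60
print(flag_dns_tunneling(sample))   # ['10.0.0.5']
```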

In addition to the volume of data, organizations are also struggling with the kinds of data they have. Multiple Ai4 Cybersecurity sessions suggested that for AI to work well, both good and adversarial data must be provided. Consider, for instance, law enforcement agencies that only have data around people committing crimes. Without the benefit of “adversarial” data that shows what a good citizen looks like, it is difficult for AI to evaluate situations effectively.

A keynote from Erin Kenneally, portfolio manager at DHS, centered on the need to bridge the “valley of death” that exists between real-world data and processes that make use of it. Kenneally noted that in order to develop, test and evaluate against such data, it must be available, labeled, scalable and sustainable. Without these conditions, industries can only continue solving “toy problems” instead of real ones. Certain concerns around fairness, accuracy, bias and discrimination remain challenging in the current landscape as well.

These challenges regarding data volumes and types have a direct effect on application security, as much of the data processing occurs at that layer. Application security testing must be able to assess and identify vulnerabilities across use cases that span massive amounts of structured and unstructured data. This need is compounded by the vast number of open-source components in use today. One example of a function where AI could make a substantial difference over time is the reliable correlation of changes in application code with the test sets most likely to find vulnerabilities, as sketched below.
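As a rough illustration of what that correlation might look like in its simplest form, the sketch below maps changed source paths to security test suites. The paths, suite names and mapping are hypothetical; the value of applying AI here would be learning this mapping from historical findings rather than maintaining it by hand.

```python
# Hypothetical mapping from source paths to the security test suites most
# likely to surface vulnerabilities in code under those paths.
CHANGE_TO_TESTS = {
    "auth/": ["sql_injection_suite", "session_fixation_suite"],
    "upload/": ["path_traversal_suite", "file_type_validation_suite"],
    "api/": ["input_validation_suite", "rate_limit_suite"],
}

def select_security_tests(changed_paths):
    """Return the security test suites most relevant to the changed files."""
    selected = set()
    for path in changed_paths:
        for prefix, suites in CHANGE_TO_TESTS.items():
            if path.startswith(prefix):
                selected.update(suites)
    return sorted(selected)

# Example usage with fabricated file paths:
print(select_security_tests(["auth/login.py", "api/orders.py"]))
```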

Organizations Are Looking for Credibility

A few panel discussions focused on evaluating vendors and best practices around artificial intelligence and machine learning. It was clear from these sessions that leaders are looking for legitimacy. There is a lot of hype and hyperbole around AI today, and more than one speaker expressed the notion that “AI is not a panacea for the world’s problems.”

A main concern during these panels was the amount of overpromising that has occurred, particularly when it comes to the capabilities of AI. Statements in support of its potential to find all vulnerabilities, eliminate all false positives, etc. did more to invite skepticism than to build confidence. We all recognize that software has limitations, and leaders want to know what those are up front so they can make better decisions about what will work in their environments.

A number of other sessions also brought to light key use cases that are needed to enable more rapid progress on the challenges many are facing. The most discussed use case related to helping clients accurately assess their current skill levels against the cyberthreats they face and the responses those threats demand.

Other use cases that saw considerable discussion are as follows:

  • How do we effectively utilize the skill we already have today? This is primarily a question of efficiency, but it also pertains to identifying and addressing gaps.
  • How do we assess and manage our software inventory? We need to know:
    • What is being monitored
    • What isn’t being monitored
    • What could “catch us out” and potentially pose serious problems
  • How do we increase analyst productivity?
  • How do we evaluate and validate conclusions made by AI systems? In other words, how do we know we can trust the technology?

Collaboration Is Key to AI Success

Artificial intelligence is only as good as the systems used to train it, and proper training requires lots of data — both good and bad. A common thread through nearly every session was that, while the amount of data each organization is exposed to is growing exponentially, the amount of data they can currently work with is actually rather limited, especially when it comes to development and testing.

Many organizations are relying on the same data sources for these purposes, and the challenge with this approach is that it does not address more unconventional approaches or attacks, which are becoming more common. In addition, many companies are lacking the necessary adversarial data to train their AI systems to recognize patterns and trends properly.

Finally, when one considers the plethora of application programming interfaces, microservices, containers and delivery systems in place today, it becomes clear that the attack surface is large and diverse. This is enabling threat actors to infiltrate organizations’ systems. Addressing cybersecurity needs is a concern that affects the whole of modern business, but the current tendency of most firms is to guard their data closely and avoid sharing. If we want to take full advantage of AI’s capabilities, we are going to have to find ways to share data and insights appropriately so they can be used to improve the technology.

While artificial intelligence in cybersecurity is still in its infancy, the inaugural Ai4 Cybersecurity conference was a great first step in advancing the broader discussion, especially as it relates to application security. This is an area that is sure to see continued growth, and I expect many similar discussions to take place in the coming years. What we can say for sure is that being able to define clear use cases, build credibility and collaborate well are early keys to success, and more are sure to develop as these discussions progress.
