I recently had the good fortune to attend the Ai4 Cybersecurity conference in New York City. This event brought together thought leaders, influencers and practitioners over two days to discuss the role of artificial intelligence (AI) and augmented intelligence in the cybersecurity industry. Here are some of the key highlights and takeaways from the event that pertain to application security.
Most People Are Struggling With Data
Whether it was the Federal Bureau of Investigation (FBI), the U.S. Department of Homeland Security (DHS), the NYC Cyber Hub, or any other organization or vendor, one thing was crystal clear: The amount of data we are all dealing with is enormous, and it is becoming more and more difficult to work with, let alone use to make sensible decisions efficiently. When you add in the fact that, for most organizations, the number of people dealing with this data tends to stay fairly constant, the problem looks that much greater. There is a definite shortage of skill in this space today. However, the outlook is not entirely bleak.
Nicole Eagan, CEO of Darktrace, shared the insight that in very specific usage models where people are searching for something exact, AI is outperforming human analysts in 46 percent of cases and tying them in another 40 percent. A good example of such a use case is Domain Name System (DNS) tunneling. In these cases, security teams monitor the size of DNS requests and replies and use rules to block high volumes of traffic from particular sources. The point is that when there is a well-defined use case, a measurable outcome, and the necessary supporting data, AI can substantially bolster cybersecurity efforts.
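To make that DNS tunneling example concrete, here is a minimal rule-based sketch in Python. The thresholds and event fields are hypothetical illustrations, not drawn from any vendor's product:

```python
from collections import defaultdict

# Hypothetical thresholds; real teams tune these against their own baseline traffic.
MAX_QUERY_LENGTH = 180        # tunnels often encode payloads in very long query names
MAX_QUERIES_PER_MINUTE = 120  # sustained high volume from one source is suspicious

def flag_dns_tunneling(events):
    """events: iterable of dicts like {"src": "10.0.0.5", "query": "...", "minute": 3}."""
    per_minute = defaultdict(int)
    alerts = []
    for event in events:
        # Rule 1: oversized queries suggest data smuggled inside the name.
        if len(event["query"]) > MAX_QUERY_LENGTH:
            alerts.append((event["src"], "oversized query"))
        per_minute[(event["src"], event["minute"])] += 1
    # Rule 2: abnormally high query volume from a single source.
    for (src, minute), count in per_minute.items():
        if count > MAX_QUERIES_PER_MINUTE:
            alerts.append((src, f"{count} queries in minute {minute}"))
    return alerts

events = [
    {"src": "10.0.0.5", "query": "a" * 200 + ".tunnel.example.com", "minute": 3},
    {"src": "10.0.0.7", "query": "www.example.com", "minute": 3},
]
print(flag_dns_tunneling(events))  # -> [('10.0.0.5', 'oversized query')]
```

The appeal of this use case for AI is precisely that the rules, thresholds and outcomes are this well defined and measurable.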
In addition to the volume of data, organizations are also struggling with the kinds of data they have. Multiple Ai4 Cybersecurity sessions suggested that for AI to work well, both good and adversarial data must be provided. Consider, for instance, law enforcement agencies that only have data around people committing crimes. Without the benefit of benign data that shows what a good citizen looks like, it is difficult for AI to evaluate situations effectively.
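To see why both classes matter, consider this minimal sketch using scikit-learn; the feature vectors and labels are invented for illustration. A supervised model trained on crime data alone has nothing to separate:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors: [requests_per_minute, failed_logins]
malicious = [[300, 12], [450, 30], [280, 9]]
benign = [[20, 0], [35, 1], [15, 0]]

clf = LogisticRegression()

# Training on malicious samples alone fails outright: with only one class,
# there is no boundary for the model to learn.
try:
    clf.fit(malicious, [1, 1, 1])
except ValueError as err:
    print("one-class training fails:", err)

# With both malicious and benign ("good citizen") samples, a boundary exists.
clf.fit(malicious + benign, [1, 1, 1, 0, 0, 0])
print(clf.predict([[310, 15], [18, 0]]))  # -> [1 0]
```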
A keynote from Erin Kenneally, portfolio manager at DHS, centered on the need to bridge the “valley of death” that exists between real-world data and the processes that make use of it. Kenneally noted that in order to develop, test and evaluate against such data, it must be available, labeled, scalable and sustainable. Without these conditions, industries can only continue solving “toy problems” instead of real ones. Concerns around fairness, accuracy, bias and discrimination also remain difficult to address in the current landscape.
These challenges regarding data volumes and types have a direct effect on application security, since much of an organization's data processing occurs at that layer. Application security testing must be able to assess and identify vulnerabilities across use cases that span massive amounts of structured and unstructured data. This need is compounded by the vast number of open-source software components in use today. One example of a function where AI could make a substantial difference over time is the reliable correlation of changes in application code with the test sets most likely to find vulnerabilities.
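As a purely illustrative sketch of that idea (the file paths and test names below are hypothetical), change-based test selection might map touched areas of the codebase to the security test suites that have historically paid off there:

```python
# Hypothetical mapping built from historical results: which security test
# suites have found vulnerabilities when these parts of the codebase changed.
TEST_MAP = {
    "auth/": ["test_session_fixation", "test_credential_stuffing"],
    "api/": ["test_injection", "test_broken_object_auth"],
    "parsers/": ["test_malformed_input", "test_xxe"],
}

def select_tests(changed_files):
    """Return the test suites most likely to find issues for this change set."""
    selected = set()
    for path in changed_files:
        for prefix, tests in TEST_MAP.items():
            if path.startswith(prefix):
                selected.update(tests)
    return sorted(selected)

print(select_tests(["api/users.py", "parsers/xml_loader.py"]))
# -> ['test_broken_object_auth', 'test_injection', 'test_malformed_input', 'test_xxe']
```

An AI system would learn and continuously update a mapping like this from test outcomes rather than hard-coding it, which is where the substantial difference over time would come from.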
Organizations Are Looking for Credibility
There were a few panel discussions that focused on evaluating vendors and best practices around artificial intelligence and machine learning. It was clear from these sessions that leaders are looking for legitimacy. There is a lot of hype and hyperbole around AI today, and more than one speaker expressed the notion that “AI is not a panacea for the world’s problems.”
A chief concern during these panels was the amount of overpromising that has occurred, particularly around the capabilities of AI. Statements touting its potential to find all vulnerabilities, eliminate all false positives and so on did more to breed skepticism than confidence. We all recognize that software has limitations, and leaders want to know what those are up front so they can make better decisions about what will work in their environments.
A number of other sessions also brought to light key use cases that could enable more rapid progress against the challenges many are facing. The top use case discussed was clients being able to accurately assess their current skill levels against today's cyberthreats and the responses they require.
Other use cases that saw considerable discussion are as follows:
- How do we effectively utilize the skill we already have today? This is primarily a question of efficiency, but it also pertains to identifying and addressing gaps.
- How do we assess and manage our software inventory? (A minimal sketch follows this list.) We need to know:
- What is being monitored
- What isn’t being monitored
- What could “catch us out” and potentially pose serious problems
- How do we increase analyst productivity?
- How do we evaluate and validate conclusions made by AI systems? In other words, how do we know we can trust the technology?
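For the inventory question above, even a simple comparison between what a discovery scan finds and what the monitoring platform actually covers can surface the blind spots that could “catch us out.” A minimal sketch, with hypothetical asset names:

```python
# Hypothetical asset lists; in practice these would come from discovery
# scans and the monitoring platform's API.
discovered = {"billing-svc", "auth-svc", "legacy-ftp", "reporting-db"}
monitored = {"billing-svc", "auth-svc", "reporting-db"}

unmonitored = discovered - monitored  # assets that could catch us out
stale = monitored - discovered        # monitored entries no longer deployed

print("not monitored:", sorted(unmonitored))  # -> ['legacy-ftp']
print("stale entries:", sorted(stale))        # -> []
```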
Collaboration Is Key to AI Success
Artificial intelligence is only as good as the systems used to train it, and proper training requires lots of data — both good and bad. A common thread through nearly every session was that, while the amount of data each organization is exposed to is growing exponentially, the amount of data they can currently work with is actually rather limited, especially when it comes to development and testing.
Many organizations are relying on the same data sources for these purposes, and the challenge with this approach is that it does not cover the more unconventional attack techniques that are becoming increasingly common. In addition, many companies lack the adversarial data needed to train their AI systems to recognize patterns and trends properly.
Finally, when one considers the plethora of application programming interfaces, microservices, containers and delivery systems in place today, it becomes clear that the attack surface is large and diverse. This is enabling threat actors to infiltrate organizations’ systems. Addressing cybersecurity needs is a concern that affects the whole of modern business, but the current tendency of most firms is to guard their data closely and avoid sharing. If we want to take full advantage of AI’s capabilities, we are going to have to find ways to share data and insights appropriately so they can be used to improve the technology.
While artificial intelligence in cybersecurity is still in its infancy, the inaugural Ai4 Cybersecurity conference was a great first step in advancing the broader discussion, especially as it relates to application security. This is an area that is sure to see continued growth, and I expect many similar discussions to take place in the coming years. What we can say for sure is that being able to define clear use cases, build credibility and collaborate well are early keys to success, and more are sure to develop as these discussions progress.
Worldwide Application Security Evangelist