Listen to this podcast on iTunes, Soundcloud or wherever you find your favorite audio content.
Artificial intelligence (AI) has been making headlines for several years now, but what’s the story behind the hype? And what opportunities and risks does AI present for the security industry in particular?
In this podcast episode we’re demystifying AI in cybersecurity with the help of three IBM experts: Carma Austin, Worldwide Sales Leader, Security Intelligence SaaS; Doug Lhotka, Executive CyberSecurity Architect, CISSP-ISSAP; and Jeff Crume, IT Security Architect, Distinguished Engineer and IBM Master Inventor.
What Does AI Really Mean?
Lhotka is quick to point out that many companies are “getting out over their skis” when it comes to AI — they’re overextending the concept and their mastery of it in order to commercialize a still developing technology.
Lhotka says AI is the simulation of human thinking in a machine. For IBM, this means mastering the “big three” skills: mining data, recognizing patterns and understanding natural language.
Crume adds that AI is a set of technologies rather than a single solution. He recommends thinking of the concept like a Venn diagram with multiple connections and subsets. Austin, meanwhile, emphasizes that AI is an architecture, and that not all AI is created equal.
Listen to the podcast
How IBM Is Approaching Machine Learning
According to Crume, most companies aren’t actually utilizing AI. Instead, they’re tackling smaller subsets of the technology.
For IBM, one area of focus is using machine learning to analyze both structured and unstructured cybersecurity datasets. Indeed, the game-changer is creating AI tools that can comprehend unstructured data, such as blogs and documents meant for human consumption, and leverage it to build statistical models. Austin points to QRadar with Watson, which is focused on “assessing and interpreting incident data,” as an approach to practical machine learning implementation.
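To make the idea of turning unstructured text into a statistical model concrete, here is a deliberately simple sketch. This is not IBM's actual Watson pipeline; the sample text, incident terms, and scoring are all hypothetical. It shows only the basic pattern: tokenize free text, build a term-frequency feature vector, then score its overlap with terms from an incident.

```python
from collections import Counter
import re

def tokenize(text):
    """Lowercase free text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def term_frequencies(doc):
    """Turn an unstructured document into a structured feature
    vector: a simple bag-of-words term-frequency mapping."""
    tokens = tokenize(doc)
    counts = Counter(tokens)
    total = len(tokens)
    return {term: n / total for term, n in counts.items()}

# A hypothetical threat-intelligence blog snippet (unstructured data).
blog_post = (
    "A new phishing campaign delivers ransomware through "
    "malicious invoice attachments, researchers warn."
)
features = term_frequencies(blog_post)

# Score overlap with terms drawn from a hypothetical incident.
incident_terms = {"phishing", "ransomware", "invoice"}
relevance = sum(features.get(t, 0.0) for t in incident_terms)
print(round(relevance, 3))  # → 0.25
```

Real systems use far richer representations (embeddings, entity extraction), but the shape is the same: prose written for humans becomes numbers a model can reason over.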
Key Challenges and AI Threats
Companies today are up against threat actors who want to compromise AI environments. Security teams must ensure that their IT environments are clean before building out machine learning structures, or they could find themselves facing serious AI threats.
As Lhotka notes, seasonal business cycles can also be challenging. For example, if AI tools benchmark central processing unit (CPU) utilization during the summer months for retail companies, they may set off alarms in November and December, when purchase volumes and the resulting CPU stress go through the roof. Next-generation AI tools are expected to take this cycle into account.
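The baselining problem Lhotka describes can be sketched in a few lines. This is a toy illustration, not a real monitoring tool, and all sample utilization figures are invented: instead of one yearly average, each month keeps its own mean and standard deviation, so a December spike is judged against December's norm.

```python
from statistics import mean, stdev

def build_monthly_baseline(samples):
    """Group (month, cpu_pct) readings into per-month baselines,
    so seasonal peaks are compared with the same season rather
    than with a single year-round average."""
    by_month = {}
    for month, pct in samples:
        by_month.setdefault(month, []).append(pct)
    return {m: (mean(v), stdev(v)) for m, v in by_month.items() if len(v) > 1}

def is_anomalous(baseline, month, pct, z_threshold=3.0):
    """Flag a reading only if it deviates from that month's own norm."""
    mu, sigma = baseline[month]
    if sigma == 0:
        return pct != mu
    return abs(pct - mu) / sigma > z_threshold

# Hypothetical retail workload: quiet summers, busy Decembers.
samples = [(7, 38), (7, 41), (7, 40), (12, 84), (12, 86), (12, 85)]
baseline = build_monthly_baseline(samples)

print(is_anomalous(baseline, 12, 85))  # False: normal for December
print(is_anomalous(baseline, 7, 85))   # True: alarming for July
```

A baseline trained only on summer data would flag every December reading; season-aware baselining is one way next-generation tools can suppress those false alarms.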
The Future of AI in Cybersecurity
Though still in its relatively early stages, AI already has a real-world impact. According to Austin, QRadar helped one company accelerate its analysis process by 50 percent, and Lhotka points to the potential of machine learning in the transition from DevOps to DevSecOps by reducing false positives in source code security scanning.
Still, as Crume makes clear, there’s more work to do: Since the aim of AI in cybersecurity is to simulate human learning, realizing the full impact of this technology requires ongoing training, correction and innovation.
Learn more about IBM QRadar Advisor with Watson
If you enjoyed listening, please consider rating the podcast or leaving your feedback on iTunes or wherever you listen.