October 30, 2018 | By Douglas Bonderud

Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where AI companies raised a combined $130 million worth of investments from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs.

It’s easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, rising business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.

It’s time for an artificial intelligence reality check. What’s the hype, where’s the hope and what does effective implementation really look like?

What Are the Current Limitations of Artificial Intelligence?

AI already has a home in IT security. As Computer Weekly notes, machine learning tools are “invaluable” for malware analysis because they can quickly learn the difference between clean and malicious data when fed correctly labeled samples. What’s catching the attention of chief information security officers (CISOs) and chief information officers (CIOs) right now, however, is the prospect of AI tools that require minimal human interaction to improve network security.
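To make that point concrete, here is a minimal sketch of the supervised approach described above, in which a classifier only learns to separate clean from malicious samples because humans supplied correctly labeled training data. The features, dataset and model choice are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Minimal sketch of supervised malware classification: the model learns only
# because humans supplied correctly labeled examples (0 = clean, 1 = malicious).
# Features and data below are synthetic stand-ins, not a real analysis pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Stand-in feature vectors (think byte histograms or API-call counts per sample)
X = rng.normal(size=(1000, 32))
# Stand-in labels derived from the synthetic features for demonstration purposes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # learning happens only on the labeled samples
print(classification_report(y_test, clf.predict(X_test)))
```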

This comes down to the difference between supervised and unsupervised machine learning — current tools and technologies empower the former, but the latter is still largely out of reach. Without humans to monitor the input and output of these systems, AI tools can capture and report basic system data, but designing intelligent threat response plans of the silver-bullet variety remains beyond their scope.
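For contrast, the sketch below shows what the unsupervised side of that comparison often looks like in practice: an anomaly detector can flag unusual events without labels, but interpreting those flags and deciding on a response still falls to human analysts. The synthetic telemetry and the choice of an isolation forest are assumptions made purely for illustration.

```python
# Sketch of unsupervised anomaly detection: no labels are required, but the
# output is only a list of "unusual" events -- deciding what they mean and how
# to respond remains a human job. The telemetry here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
odd_traffic = rng.normal(loc=4.0, scale=1.0, size=(5, 8))
events = np.vstack([normal_traffic, odd_traffic])

detector = IsolationForest(contamination=0.01, random_state=1).fit(events)
flags = detector.predict(events)            # -1 marks suspected anomalies
suspicious = np.where(flags == -1)[0]
print(f"{len(suspicious)} events flagged for analyst review: {suspicious}")
```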

AI also has basic limitations that may prove insurmountable or may require a new research approach to solve. These limits are largely tied to experience: As noted by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm,” machines don’t learn from experience the same way humans do.

“A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch,” said Domingos, as reported by Wired.

For AI to take the next step in its evolution, Domingos argued, “we’re going to need some new ideas.”

AI Is Smart Now, But Could Be Genius in the Future

The hard truth is that AI hype is just that: hyperbole. But that doesn’t discount the current iterations of AI already used by organizations. For example, the insurance industry leverages AI to calculate risk more accurately and, in concert with Internet of Things (IoT) devices, is developing usage-based policies tailored to the individual. As noted by Quartz, meanwhile, AI technology in development for breast cancer research is now able to achieve 99 percent accuracy at 30 times the speed of humans. Brain cancer analysis got a 21 percent accuracy boost from AI while cutting diagnostic time in half.

As noted above, there’s also a government push to better utilize AI. Part of that initiative takes the form of a DARPA project called the Artificial Intelligence Exploration (AIE) program. According to Defense One, a subset of the program focuses on creating “third-wave” AI systems able to “locate new data and scientific resources, dissect them for useful information, compare those findings to existing research, and then generate new models.” This ability to effectively outsource scientific modeling could be incredibly useful for security teams: Imagine AI tools capable of sorting through historic security data, mining for actionable insights and then creating new threat models based on their findings.
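As a rough illustration of that last idea — and emphatically not a description of the DARPA program itself — the toy sketch below clusters hypothetical historic alert records so that recurring patterns could seed candidate threat models. Every field, value and cluster count in it is invented for the example.

```python
# Purely illustrative toy of mining historic security data for patterns.
# It clusters invented alert records; recurring clusters could then be reviewed
# as candidate threat models. Fields, values and cluster count are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical historic alerts: [hour_of_day, bytes_out, failed_logins]
alerts = np.vstack([
    rng.normal([3, 5e6, 1], [1, 1e6, 1], size=(200, 3)),    # nightly, high-egress events
    rng.normal([14, 1e4, 20], [2, 5e3, 5], size=(200, 3)),  # daytime, brute-force-like events
])

model = KMeans(n_clusters=2, n_init=10, random_state=2)
labels = model.fit_predict(StandardScaler().fit_transform(alerts))
for cluster_id in range(model.n_clusters):
    count = int((labels == cluster_id).sum())
    print(f"cluster {cluster_id}: {count} alerts -> candidate threat pattern to review")
```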

Simply put, despite the hype, there’s also hope for AI. This might take the form of completely new learning paradigms or continued refinement of existing principles. Either way, smart AI does have the potential for genius-level solutions.

The Keys to Implementing AI Solutions

Beyond what AI can actually do for CISOs and CIOs looking to shore up corporate security, companies must consider implementation: How can organizations effectively deploy AI solutions to maximize results? As noted by Gallup, they can start by pulling back the curtain on how artificial intelligence actually works. According to Nara Logics CEO Jana Eggers, companies must “stop thinking AI is magic. Simply put, it’s math with more equations and computation going on.”

What does this mean for deployment? That AI isn’t a cure-all on its own. Instead, organizations must have a culture of security and transparency that supports the deployment of AI tools. It’s also critical to create a culture of trust within the enterprise to achieve employee buy-in. Do this by demystifying AI and making employees part of the conversation rather than outside observers. This strategy aligns with recent findings that 55 percent of security alerts detected by AI still require human supervision.
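In deployment terms, that human-supervision finding usually translates into a human-in-the-loop workflow: the model scores alerts, and anything below a confidence threshold is escalated to an analyst rather than handled automatically. The sketch below is a hedged illustration of that pattern; the threshold, alert fields and example data are all hypothetical.

```python
# Hedged sketch of a human-in-the-loop triage workflow: high-confidence alerts
# are handled automatically, low-confidence ones are escalated to analysts.
# The threshold and alert data are hypothetical examples, not vendor defaults.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    model_confidence: float  # classifier's confidence that this is a true threat

AUTO_HANDLE_THRESHOLD = 0.90  # hypothetical cutoff set by the security team

def triage(alerts):
    auto, needs_human = [], []
    for alert in alerts:
        if alert.model_confidence >= AUTO_HANDLE_THRESHOLD:
            auto.append(alert)
        else:
            needs_human.append(alert)  # routed to a human analyst for review
    return auto, needs_human

alerts = [Alert("A-1", 0.97), Alert("A-2", 0.62), Alert("A-3", 0.88)]
auto, escalated = triage(alerts)
print(f"auto-handled: {[a.alert_id for a in auto]}")
print(f"escalated to analysts: {[a.alert_id for a in escalated]}")
```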

Last but not least, encourage questions. After all, that’s the eventual goal of AI: to ask hard questions and create innovative answers. Employees and C-suite members need the same freedom to question whether current deployments are working as well as possible and ask what could be done to improve AI output.

Artificial Intelligence Beyond the Hype

Emerging AI hype has convinced many organizations that it’s a silver bullet for security, but adopting current-stage technology and expecting this result puts organizations at risk. While incredibly useful with human assistance, AI in isolation is no replacement for solid information security strategy.

Still, there’s promise here. Current developments in artificial intelligence are improving speed and accuracy, while new funding is earmarked to empower more analytic capabilities. Combined with a corporate culture that supports transparency and human agency, it’s possible to maximize the existing benefits of AI and lay the groundwork for the future of machine intelligence.

Read the Ponemon Study on AI in Cybersecurity
