October 30, 2018 | By Douglas Bonderud | 4 min read

Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where AI companies raised a combined $130 million in investments from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs.

It’s easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, rising business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.

It’s time for an artificial intelligence reality check. What’s the hype, where’s the hope and what does effective implementation really look like?

What Are the Current Limitations of Artificial Intelligence?

AI already has a home in IT security. As noted by Computer Weekly, machine learning tools are “invaluable” for malware analysis since they’re able to quickly learn the difference between clean and malicious data when fed correctly labeled samples. What’s catching the attention of chief information security officers (CISOs) and chief information officers (CIOs) right now, however, is the prospect of AI tools that require minimal human interaction to improve network security.
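
As a rough sketch of what that supervised approach looks like in code, consider a classifier trained on labeled samples. The features (file size, byte entropy, count of suspicious API imports), the synthetic data and the scikit-learn model below are illustrative choices, not a reference to any specific product:

```python
# Illustrative only: a supervised malware classifier trained on labeled samples.
# The features and the synthetic data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "clean" samples: lower entropy, fewer suspicious API imports
clean = np.column_stack([
    rng.normal(200_000, 50_000, 500),   # file size in bytes
    rng.normal(4.5, 0.8, 500),          # byte entropy
    rng.poisson(5, 500),                # suspicious API import count
])
# Synthetic "malicious" samples: higher entropy (packing), more suspicious imports
malicious = np.column_stack([
    rng.normal(150_000, 60_000, 500),
    rng.normal(7.2, 0.6, 500),
    rng.poisson(25, 500),
])

X = np.vstack([clean, malicious])
y = np.array([0] * 500 + [1] * 500)     # 0 = clean, 1 = malicious (human-supplied labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Holdout accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice, the quality of those human-supplied labels matters far more than the choice of classifier.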

This comes down to the difference between supervised and unsupervised machine learning — current tools and technologies empower the former, but the latter is still largely out of reach. Without humans to monitor their input and output, AI tools can capture and report basic system data, but designing intelligent threat response plans of the silver-bullet variety remains beyond their scope.
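
To make that distinction concrete, the hypothetical sketch below contrasts the two approaches on the same telemetry: the supervised model needs every row labeled in advance, while the unsupervised one merely flags outliers that a human analyst still has to interpret. The feature values, labels and contamination rate are invented for illustration:

```python
# Illustrative contrast between supervised and unsupervised learning on network telemetry.
# Feature values, labels and the contamination rate are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal_traffic = rng.normal(loc=[500, 0.2], scale=[100, 0.05], size=(950, 2))  # bytes/s, error rate
odd_traffic = rng.normal(loc=[5_000, 0.6], scale=[800, 0.10], size=(50, 2))
X = np.vstack([normal_traffic, odd_traffic])

# Supervised: requires humans to have labeled every row beforehand.
y = np.array([0] * 950 + [1] * 50)
supervised = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised verdict for a new flow:", supervised.predict([[4_800, 0.55]])[0])

# Unsupervised: no labels, just "these rows look unusual" -- a human still has
# to decide whether an outlier is an attack or a scheduled backup job.
unsupervised = IsolationForest(contamination=0.05, random_state=7).fit(X)
flags = unsupervised.predict(X)          # -1 = outlier, 1 = inlier
print("Outliers flagged for analyst review:", int((flags == -1).sum()))
```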

AI also has basic limitations that may prove insurmountable or may require an entirely new research approach to overcome. This is largely tied to experience: As noted by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm,” machines don’t learn from experience the same way humans do.

“A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch,” said Domingos, as reported by Wired.

For AI to take the next step in its evolution, Domingos argued, “we’re going to need some new ideas.”

AI Is Smart Now, But Could Be Genius in the Future

The hard truth is that AI hype is just that: hyperbole. But that doesn’t discount the current iterations of AI already used by organizations. For example, the insurance industry leverages AI to calculate risk more accurately and, in concert with Internet of Things (IoT) devices, is developing usage-based policies tailored to the individual. As noted by Quartz, meanwhile, AI technology in development for breast cancer research is now able to achieve 99 percent accuracy at 30 times the speed of humans. Brain cancer analysis got a 21 percent accuracy boost from AI while cutting diagnostic time in half.

As noted above, there’s also a government push to better utilize AI. Part of that initiative takes the form of a DARPA project called the Artificial Intelligence Exploration (AIE) program. According to Defense One, a subset of the program focuses on creating “third-wave” AI systems able to “locate new data and scientific resources, dissect them for useful information, compare those findings to existing research, and then generate new models.” This ability to effectively outsource scientific modeling could be incredibly useful for security teams: Imagine AI tools capable of sorting through historic security data, mining for actionable insights and then creating new threat models based on their findings.
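
In the simplest terms, that kind of mining could start with nothing more than a statistical baseline built from past alerts. The sketch below is purely illustrative (it is not DARPA’s actual approach), and every field name and threshold in it is invented:

```python
# Illustrative only: build a simple per-source baseline from historic alerts,
# then flag new activity that departs from it. Field names and thresholds are invented.
from collections import defaultdict
from statistics import mean, pstdev

historic_alerts = [
    {"source": "10.0.0.5", "failed_logins": 2},
    {"source": "10.0.0.5", "failed_logins": 3},
    {"source": "10.0.0.5", "failed_logins": 1},
    {"source": "10.0.0.9", "failed_logins": 40},
    {"source": "10.0.0.9", "failed_logins": 35},
]

# "Model" = per-source mean and standard deviation of failed logins.
history = defaultdict(list)
for alert in historic_alerts:
    history[alert["source"]].append(alert["failed_logins"])
baseline = {src: (mean(v), pstdev(v)) for src, v in history.items()}

def is_anomalous(source, failed_logins, sigma=3.0):
    """Flag activity more than `sigma` deviations above the source's own baseline."""
    mu, sd = baseline.get(source, (0.0, 0.0))
    return failed_logins > mu + sigma * max(sd, 1.0)

print(is_anomalous("10.0.0.5", 30))   # True: far above this host's norm
print(is_anomalous("10.0.0.9", 38))   # False: normal for this noisy host
```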

Simply put, despite the hype, there’s also hope for AI. This might take the form of completely new learning paradigms or continued refinement of existing principles. Either way, smart AI does have the potential for genius-level solutions.

The Keys to Implementing AI Solutions

Beyond what AI can actually do for CISOs and CIOs looking to shore up corporate security, companies must consider implementation: How can organizations effectively deploy AI solutions to maximize results? As noted by Gallup, they can start by pulling back the curtain on the basics of artificial intelligence. According to Nara Logics CEO Jana Eggers, companies must “stop thinking AI is magic. Simply put, it’s math with more equations and computation going on.”
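
To ground that “it’s math” point, here is the complete computation behind a single artificial neuron, the building block that larger deep learning models scale up. The weights, bias and inputs below are arbitrary examples:

```python
# "AI is math": the entire logic of one artificial neuron -- a weighted sum
# passed through a sigmoid. Weights, bias and inputs are arbitrary examples.
import math

def neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))   # sigmoid activation

print(neuron(inputs=[0.5, 0.8], weights=[1.2, -0.7], bias=0.1))
```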

What does this mean for deployment? That AI isn’t a cure-all on its own. Instead, organizations must have a culture of security and transparency that supports the deployment of AI tools. It’s also critical to create a culture of trust within the enterprise to achieve employee buy-in. Do this by demystifying AI and making employees part of the conversation rather than outside observers. This strategy aligns with recent findings that 55 percent of security alerts detected by AI still require human supervision.
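
One practical way to honor that need for human supervision is confidence-based routing: let the model act automatically only on alerts it scores with high confidence, and queue everything else for an analyst. The alert scores and cutoffs in this sketch are hypothetical:

```python
# Illustrative human-in-the-loop triage: route low-confidence AI verdicts to analysts.
# Alert scores and thresholds are hypothetical.
AUTO_BLOCK = 0.95      # confident enough to act automatically
AUTO_DISMISS = 0.05    # confident enough to suppress

def triage(alerts):
    """Split model-scored alerts into automatic actions and a human review queue."""
    auto, review = [], []
    for alert in alerts:
        score = alert["malicious_probability"]
        if score >= AUTO_BLOCK or score <= AUTO_DISMISS:
            auto.append(alert)
        else:
            review.append(alert)        # a human analyst makes the call
    return auto, review

alerts = [
    {"id": 1, "malicious_probability": 0.99},
    {"id": 2, "malicious_probability": 0.60},
    {"id": 3, "malicious_probability": 0.02},
    {"id": 4, "malicious_probability": 0.45},
]
auto, review = triage(alerts)
print(f"Handled automatically: {len(auto)}, sent to analysts: {len(review)}")
```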

Last but not least, encourage questions. After all, that’s the eventual goal of AI: to ask hard questions and create innovative answers. Employees and C-suite members need the same freedom to question whether current deployments are working as well as possible and ask what could be done to improve AI output.

Artificial Intelligence Beyond the Hype

Emerging AI hype has convinced many organizations that it’s a silver bullet for security, but adopting current-stage technology and expecting this result puts organizations at risk. While incredibly useful with human assistance, AI in isolation is no replacement for a solid information security strategy.

Still, there’s promise here. Current developments in artificial intelligence are improving speed and accuracy, while new funding is earmarked to empower more analytic capabilities. Combined with a corporate culture that supports transparency and human agency, it’s possible to maximize the existing benefits of AI and lay the groundwork for the future of machine intelligence.

Read the Ponemon Study on AI in Cybersecurity
