October 30, 2018 By Douglas Bonderud 4 min read

Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where AI companies raised a combined $130 million in investment from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs.

It’s easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, rising business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.

It’s time for an artificial intelligence reality check. What’s the hype, where’s the hope and what does effective implementation really look like?

What Are the Current Limitations of Artificial Intelligence?

AI already has a home in IT security. As noted in a Computer Weekly article, machine learning tools are “invaluable” for malware analysis since they’re able to quickly learn the difference between clean and malicious data when fed correctly labeled samples. What’s catching the attention of chief information security officers (CISOs) and chief information officers (CIOs) right now, however, is the prospect of AI tools that require minimal human interaction to improve network security.

This comes down to the difference between supervised and unsupervised machine learning — current tools and technologies empower the former, but the latter is still largely out of reach. Without humans to monitor the input and output of systems, AI tools can capture and report basic system data, but designing intelligent threat response plans of the silver-bullet variety remains beyond their scope.
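The supervised case described above can be illustrated with a minimal, hypothetical sketch. Everything here is an assumption for illustration: the two "features" per file (entropy and a count of suspicious API calls), the toy numbers, and the nearest-centroid model standing in for a real malware classifier. The key point it shows is that the analyst-supplied labels are what let the model learn the clean/malicious boundary.

```python
# Hypothetical sketch: supervised classification on toy "file feature" data.
# Feature values and labels are illustrative, not real malware telemetry.

# Toy features per file: (entropy, count of suspicious API calls).
labeled_samples = [
    ((3.1, 2.0), "clean"), ((2.8, 1.0), "clean"), ((3.4, 3.0), "clean"),
    ((7.2, 9.0), "malicious"), ((6.9, 8.0), "malicious"), ((7.5, 10.0), "malicious"),
]

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(samples):
    """Supervised step: compute one centroid per analyst-supplied label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, features):
    """Assign the label of the nearest centroid."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

model = train(labeled_samples)
print(classify(model, (7.0, 8.5)))  # prints "malicious": resembles the labeled malware samples
```

An unsupervised tool, by contrast, could only group similar files together; without labels, a human analyst would still have to decide which group, if either, is actually malicious — which is why fully hands-off security AI remains out of reach.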

AI also has basic limitations that may prove insurmountable or may require a new research approach to solve. This is largely tied to experience: As noted by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm,” machines don’t learn from experience the same way humans do.

“A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch,” said Domingos, as reported by Wired.

For AI to take the next step in its evolution, Domingos argued, “we’re going to need some new ideas.”

AI Is Smart Now, But Could Be Genius in the Future

The hard truth is that AI hype is just that: hyperbole. But that doesn’t discount the current iterations of AI already used by organizations. For example, the insurance industry leverages AI to calculate risk more accurately and, in concert with Internet of Things (IoT) devices, is developing usage-based policies tailored to the individual. As noted by Quartz, meanwhile, AI technology in development for breast cancer research is now able to achieve 99 percent accuracy at 30 times the speed of humans. Brain cancer analysis got a 21 percent accuracy boost from AI while cutting diagnostic time in half.

As noted above, there’s also a government push to better utilize AI. Part of that initiative takes the form of a DARPA project called the Artificial Intelligence Exploration (AIE) program. According to Defense One, a subset of the program focuses on creating “third-wave” AI systems able to “locate new data and scientific resources, dissect them for useful information, compare those findings to existing research, and then generate new models.” This ability to effectively outsource scientific modeling could be incredibly useful for security teams: Imagine AI tools capable of sorting through historic security data, mining for actionable insights and then creating new threat models based on their findings.
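The "mine historic security data for actionable insights" idea above can be sketched in miniature. This is a hypothetical illustration, not any DARPA system: the failed-login counts are invented, and a simple three-standard-deviation threshold stands in for a real learned model. What it shows is the basic pattern of building a baseline from history and flagging deviations worth turning into new threat models.

```python
# Hypothetical sketch: flag hours whose failed-login counts deviate
# sharply from a historic baseline. Data and threshold are illustrative.
from statistics import mean, stdev

# Historic failed-login counts per hour (toy data).
history = [4, 5, 3, 6, 5, 4, 7, 5, 6, 4, 5, 6]

baseline, spread = mean(history), stdev(history)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above baseline."""
    return count > baseline + threshold * spread

print(is_anomalous(40))  # True: a spike worth investigating as a potential threat
print(is_anomalous(6))   # False: within normal variation
```

A genuine "third-wave" system would go further than this sketch, comparing such findings against existing research and generating new models rather than applying a fixed statistical rule.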

Simply put, despite the hype, there’s also hope for AI. This might take the form of completely new learning paradigms or continued refinement of existing principles. Either way, smart AI does have the potential for genius-level solutions.

The Keys to Implementing AI Solutions

Beyond what AI can actually do for CISOs and CIOs looking to shore up corporate security, companies must consider implementation: How can organizations effectively deploy AI solutions to maximize results? As noted by Gallup, they can start by pulling back the curtain on the basis of artificial intelligence. According to Nara Logics CEO Jana Eggers, companies must “stop thinking AI is magic. Simply put, it’s math with more equations and computation going on.”

What does this mean for deployment? That AI isn’t a cure-all on its own. Instead, organizations must have a culture of security and transparency that supports the deployment of AI tools. It’s also critical to create a culture of trust within the enterprise to achieve employee buy-in. Do this by demystifying AI and making employees part of the conversation rather than outside observers. This strategy aligns with recent findings that 55 percent of security alerts detected by AI still require human supervision.

Last but not least, encourage questions. After all, that’s the eventual goal of AI: to ask hard questions and create innovative answers. Employees and C-suite members need the same freedom to question whether current deployments are working as well as possible and ask what could be done to improve AI output.

Artificial Intelligence Beyond the Hype

Emerging AI hype has convinced many organizations that it’s a silver bullet for security, but adopting current-stage technology and expecting this result puts organizations at risk. While incredibly useful with human assistance, AI in isolation is no replacement for solid information security strategy.

Still, there’s promise here. Current developments in artificial intelligence are improving speed and accuracy, while new funding is earmarked to empower more analytic capabilities. Combined with a corporate culture that supports transparency and human agency, these advances make it possible to maximize the existing benefits of AI and lay the groundwork for the future of machine intelligence.

