Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where AI companies raised a combined $130 million worth of investments from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs.

It’s easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, rising business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.

It’s time for an artificial intelligence reality check. What’s the hype, where’s the hope and what does effective implementation really look like?

What Are the Current Limitations of Artificial Intelligence?

AI already has a home in IT security. As Computer Weekly notes, machine learning tools are “invaluable” for malware analysis because they can quickly learn the difference between clean and malicious data when fed correctly labeled samples. What’s catching the attention of chief information security officers (CISOs) and chief information officers (CIOs) right now, however, is the prospect of AI tools that require minimal human interaction to improve network security.
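The supervised approach described above can be sketched in a few lines: a classifier is fed feature vectors from samples already labeled “clean” or “malicious” and learns a boundary between them. This is a minimal, hypothetical illustration (the features, values, and nearest-centroid method are assumptions, not a description of any vendor’s product):

```python
# Toy supervised malware classifier: learn per-label centroids from
# labeled feature vectors, then assign new samples to the nearest one.

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def train(labeled):
    """labeled: list of (features, label) pairs; returns per-label centroids."""
    by_label = {}
    for features, label in labeled:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, features):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical features: [file entropy, count of suspicious API imports]
training_set = [
    ([7.8, 12], "malicious"),
    ([7.5, 9],  "malicious"),
    ([4.1, 1],  "clean"),
    ([3.9, 0],  "clean"),
]
model = train(training_set)
print(classify(model, [7.6, 10]))  # a sample resembling the known malware
```

The point is the dependency on labels: the model is only as good as the correctly labeled samples humans supply it.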

This comes down to the difference between supervised and unsupervised machine learning: current tools and technologies empower the former, but the latter remains largely out of reach. Without humans to monitor the input and output of systems, AI tools can capture and report basic system data, but designing intelligent threat response plans of the silver-bullet variety is beyond their scope.
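The contrast is easy to see in code. Without labels, an unsupervised system can only surface statistical oddities in raw data; it cannot name a threat or plan a response. A minimal sketch, using hypothetical hourly login counts and a simple standard-deviation rule:

```python
# Toy unsupervised anomaly detector: no labels, just "this value is
# unusually far from the rest." Interpretation is left to a human.
import statistics

def flag_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

hourly_logins = [12, 14, 11, 13, 12, 15, 11, 14, 13, 95]  # one spike
print(flag_outliers(hourly_logins))
```

The detector flags the spike, but whether 95 logins is an attack, a marketing campaign, or a clock bug is exactly the judgment call still left to human analysts.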

AI also has basic limitations that may prove fundamental, or may require an entirely new research approach to overcome. This is largely tied to experience: As noted by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm,” machines don’t learn from experience the same way humans do.

“A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch,” said Domingos, as reported by Wired.

For AI to take the next step in its evolution, Domingos argued, “we’re going to need some new ideas.”

AI Is Smart Now, But Could Be Genius in the Future

The hard truth is that AI hype is just that: hyperbole. But that doesn’t discount the current iterations of AI already used by organizations. For example, the insurance industry leverages AI to calculate risk more accurately and, in concert with Internet of Things (IoT) devices, is developing usage-based policies tailored to the individual. As noted by Quartz, meanwhile, AI technology in development for breast cancer research is now able to achieve 99 percent accuracy at 30 times the speed of humans. Brain cancer analysis got a 21 percent accuracy boost from AI while cutting diagnostic time in half.

As noted above, there’s also a government push to better utilize AI. Part of that initiative takes the form of a DARPA project called the Artificial Intelligence Exploration (AIE) program. According to Defense One, a subset of the program focuses on creating “third-wave” AI systems able to “locate new data and scientific resources, dissect them for useful information, compare those findings to existing research, and then generate new models.” This ability to effectively outsource scientific modeling could be incredibly useful for security teams: Imagine AI tools capable of sorting through historic security data, mining for actionable insights and then creating new threat models based on their findings.

Simply put, despite the hype, there’s also hope for AI. This might take the form of completely new learning paradigms or continued refinement of existing principles. Either way, smart AI does have the potential for genius-level solutions.

The Keys to Implementing AI Solutions

Beyond what AI can actually do for CISOs and CIOs looking to shore up corporate security, companies must consider implementation: How can organizations effectively deploy AI solutions to maximize results? As noted by Gallup, they can start by pulling back the curtain on how artificial intelligence actually works. According to Nara Logics CEO Jana Eggers, companies must “stop thinking AI is magic. Simply put, it’s math with more equations and computation going on.”

What does this mean for deployment? That AI isn’t a cure-all on its own. Instead, organizations must have a culture of security and transparency that supports the deployment of AI tools. It’s also critical to create a culture of trust within the enterprise to achieve employee buy-in. Do this by demystifying AI and making employees part of the conversation rather than outside observers. This strategy aligns with recent findings that 55 percent of security alerts detected by AI still require human supervision.
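That human-in-the-loop model can be sketched as a simple triage rule: alerts the AI scores with high confidence are handled automatically, and everything else is routed to an analyst queue. The alert names, scores, and 0.9 cutoff below are hypothetical, not drawn from any particular product:

```python
# Toy alert triage: split AI-scored alerts into an automated queue and
# a human-review queue based on a confidence floor.

def triage(alerts, confidence_floor=0.9):
    """alerts: list of (name, confidence). Returns (auto, review) queues."""
    auto, review = [], []
    for name, confidence in alerts:
        (auto if confidence >= confidence_floor else review).append(name)
    return auto, review

alerts = [
    ("known-botnet-beacon", 0.98),
    ("unusual-admin-login", 0.62),
    ("signature-match-trojan", 0.95),
    ("odd-dns-burst", 0.71),
]
auto, review = triage(alerts)
print(auto)    # high-confidence alerts handled automatically
print(review)  # everything else still needs an analyst
```

Where the confidence floor sits is itself a transparency question: set it too low and the AI acts alone on ambiguous alerts; set it too high and analysts drown in the queue.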

Last but not least, encourage questions. After all, that’s the eventual goal of AI: to ask hard questions and create innovative answers. Employees and C-suite members need the same freedom to question whether current deployments are working as well as possible and to ask what could be done to improve AI output.

Artificial Intelligence Beyond the Hype

Emerging AI hype has convinced many organizations that it’s a silver bullet for security, but adopting current-stage technology and expecting this result puts organizations at risk. While incredibly useful with human assistance, AI in isolation is no replacement for solid information security strategy.

Still, there’s promise here. Current developments in artificial intelligence are improving speed and accuracy, while new funding is earmarked to empower more analytic capabilities. Combined with a corporate culture that supports transparency and human agency, it’s possible to maximize the existing benefits of AI and lay the groundwork for the future of machine intelligence.

