Artificial intelligence (AI) is coming soon to a network near you. Limited forms of AI are already in use, and much more powerful applications are now in development. That means there’s no better time to start thinking about the implications of AI on cybersecurity.

Artificial Intelligence Evolves

Speculation about AI in the form of robots has been popular for generations, dating back well before pioneering digital computers to the giant electronic brains of the 1940s. From the very beginning, this speculation has included worries about the dangers that might be posed by malicious or mistaken robots. With AI becoming a reality, its potential risks and benefits are no longer mere speculation.

When people started thinking about artificial intelligence, they had only one point of reference to go by: human intelligence. Whether or not robots were made to look vaguely human, they were imagined as thinking and feeling more or less the way we do. In the novel “2001: A Space Odyssey,” author Arthur C. Clarke portrayed HAL 9000 as driven insane by the emotional stresses of Cold War-style deception.

But AI in real life has developed in an entirely different way. For example, it was once assumed that any computer able to play champion-level chess would need to think about the game the way humans do. In fact, we still do not understand how top human players play so well — and computers beat them anyway. They use the brute-force capability of testing millions of possible moves, something no human can do, to find the best possible option.

Thus, as Michael Chorost pointed out at Slate, AIs are not subject to emotional strains or complications from mixed motivations because they have no emotions or motivations of any sort.

Emulating Human Intelligence or Concentrating Human Intelligence?

Instead of emulating human intelligence, it could be said that real-world AI concentrates human intelligence in the same way that a lever concentrates the user’s strength onto the desired task.

In fact, AI has much in common with institutional intelligence. Everyone loves to hate bureaucracy, but organizations can and do display intelligent behavior, expressed by characteristics such as institutional memory and institutional learning.

If you propose an idea at a meeting and your colleagues agree to go with it, congratulations! You have just contributed to institutional intelligence. The AI of tomorrow may well be a sort of automated organization, with both human and electronic members contributing to overall intelligence.

Work on composite human-machine intelligence is already focusing specifically on network security issues. As Naked Security reported, MIT researchers are working on a system that combines human experts and machine learning to achieve a threefold improvement in threat detection combined with a fivefold reduction in false positives.

The system — called AI2 because it combines artificial intelligence with (human) analyst intuition — looks for patterns, which it then presents to its human partners for evaluation. Those human insights improve the machine’s ability to ignore nonthreat patterns while still warning of potentially dangerous ones.
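That feedback loop can be sketched in a few lines of code. The example below is a toy human-in-the-loop triage filter, not the MIT system itself: the scoring function, threshold and event fields are all illustrative placeholders.

```python
# Toy human-in-the-loop anomaly triage, loosely modeled on the AI2
# workflow described above. The scoring rule and event structure are
# illustrative assumptions, not the MIT system's actual algorithm.

def score(event, known_benign):
    """Naive anomaly score: events matching analyst-confirmed
    benign patterns score low, everything else scores high."""
    return 0.0 if event["pattern"] in known_benign else 1.0

def triage(events, analyst_label, known_benign, threshold=0.5):
    """Flag suspicious events, ask the human analyst, and fold the
    answers back into the benign-pattern set."""
    alerts = []
    for event in events:
        if score(event, known_benign) >= threshold:
            if analyst_label(event):          # analyst confirms a threat
                alerts.append(event)
            else:                             # analyst marks a false positive
                known_benign.add(event["pattern"])
    return alerts

# Usage: the analyst flags "port-scan" as a threat; the recurring
# "backup-job" pattern is learned as benign and stops alerting.
events = [{"pattern": "backup-job"}, {"pattern": "port-scan"},
          {"pattern": "backup-job"}]
benign = set()
alerts = triage(events, lambda e: e["pattern"] == "port-scan", benign)
```

The point of the design is the second branch: every false positive the analyst dismisses shrinks the set of patterns that will alert in the future, which is how a system of this shape can cut false positives while keeping real detections.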

A Whole New Meaning of ‘Trusted Users’

Once human social learning is added to the AI mix, a new and subtle security challenge emerges. A leading security threat today is social engineering, such as spear phishing, which tricks users into making security mistakes. Social learning for AIs introduces the risk that malicious teachers could trick the AI or even subvert it into helping attackers.

The recent mishap of the Tay chatbot gives a hint of the potential risks. According to The Loop, Tay was designed to engage in innocent online small talk. But human Internet trolls soon attacked it and essentially taught it to behave like an Internet troll. Tay was quickly taken offline for further development.

Any social learning AI is potentially vulnerable to this type of attack. Designers need to ensure that only trusted teachers have access to the AI, particularly in the critical initial stages of learning before the AI has been taught to be wary of suspicious lessons. The risks can come not only from deliberately malicious users, but also from careless ones who could inadvertently teach the wrong lessons. If the AI resembles AI2 in being designed for security tasks, the challenge of identifying trusted users is even more critical.
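One straightforward precaution is to gate the AI’s training input behind an explicit set of trusted teachers. The sketch below is a hypothetical illustration of that idea; the class name, roles and API are assumptions for the example, not any particular product’s design.

```python
# Hypothetical sketch of gating a social-learning AI's training input
# to trusted teachers, as described above. Names and structure are
# illustrative assumptions.

class LearningGate:
    def __init__(self, trusted_teachers):
        self.trusted = set(trusted_teachers)
        self.bootstrap = True          # initial learning phase: strictest rules
        self.lessons = []

    def submit(self, teacher, lesson):
        """Accept a lesson only from a trusted teacher."""
        if teacher not in self.trusted:
            return False               # reject untrusted input outright
        self.lessons.append((teacher, lesson))
        return True

    def end_bootstrap(self):
        """After initial training, the model can begin applying its own
        skepticism; the gate remains as defense in depth."""
        self.bootstrap = False

# Usage: a vetted teacher's lesson is accepted, a troll's is not.
gate = LearningGate({"alice", "bob"})
accepted = gate.submit("alice", "greet users politely")
rejected = gate.submit("troll42", "insult everyone")
```

A gate like this would not have saved Tay on its own, since the hard problem is deciding who belongs in the trusted set in the first place, but it makes the boundary between teaching and attacking explicit.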

Can we achieve this level of user security? As applied to AI, the challenge is a new one, but it is really the oldest security challenge of all. As the Latin proverb asks, “Quis custodiet ipsos custodes?” Who will guard the guards themselves?

All security is ultimately about human trust. This will not change, even as we enlist AIs to be our security partners.
