Artificial intelligence (AI) is coming soon to a network near you. Limited forms of AI are already in use, and much more powerful applications are now in development. That means there’s no better time to start thinking about the implications of AI for cybersecurity.

Artificial Intelligence Evolves

Speculation about AI in the form of robots has been popular for generations, dating back to well before the pioneering digital computers of the 1940s, the giant “electronic brains” of their era. From the very beginning, this speculation has included worries about the dangers that might be posed by malicious or mistaken robots. With AI becoming a reality, its potential risks and benefits are no longer mere speculation.

When people started thinking about artificial intelligence, they had only one point of reference to go by: human intelligence. Whether or not robots were made to look vaguely human, they were imagined as thinking and feeling more or less the way we do. In the novel “2001: A Space Odyssey,” author Arthur C. Clarke portrayed HAL 9000 as driven insane by the emotional stresses of Cold War-style deception.

But AI in real life has developed in an entirely different way. For example, it was once assumed that any computer able to play champion-level chess would need to think about the game the way humans do. In fact, we still do not understand how top human players play so well — and computers beat them anyway. They use the brute-force capability of testing millions of possible moves, something no human can do, to find the best possible option.
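To make the contrast concrete, here is a minimal, self-contained sketch of brute-force game search. It uses the toy game of Nim (take one to three stones; whoever takes the last stone wins) rather than chess, so the entire game tree fits in a few lines; the game and function names are purely illustrative.

```python
# A self-contained sketch of brute-force game search, using Nim
# instead of chess so the whole tree can be searched exhaustively.
from functools import lru_cache

MOVES = (1, 2, 3)  # a player may take 1, 2 or 3 stones per turn

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    # Test every legal move: a position is winning if ANY move leaves
    # the opponent in a losing position. No intuition, just exhaustion.
    return any(take <= stones and not can_win(stones - take)
               for take in MOVES)

def best_move(stones: int) -> int:
    """Pick a move that leaves the opponent in a losing position."""
    for take in MOVES:
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # lost against perfect play; concede as slowly as possible

print(best_move(21))  # from 21 stones, perfect play is to take 1
```

A chess engine works on the same exhaustive principle, just with far deeper search, aggressive pruning and position evaluation layered on top.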

Thus, as Michael Chorost pointed out at Slate, AIs are not subject to emotional strains or complications from mixed motivations because they have no emotions or motivations of any sort.

Emulating Human Intelligence or Concentrating Human Intelligence?

Instead of emulating human intelligence, it could be said that real-world AI concentrates human intelligence in the same way that a lever concentrates the user’s strength onto the desired task.

In fact, AI has much in common with institutional intelligence. Everyone loves to hate bureaucracy, but organizations can and do display intelligent behavior, expressed by characteristics such as institutional memory and institutional learning.

If you propose an idea at a meeting and your colleagues agree to go with it, congratulations! You have just contributed to institutional intelligence. The AI of tomorrow may well be a sort of automated organization, with both human and electronic members contributing to overall intelligence.

Work on composite human-machine intelligence is already focusing specifically on network security issues. As Naked Security reported, MIT researchers are working on a system that combines human experts and machine learning to achieve a threefold improvement in threat detection combined with a fivefold reduction in false positives.

The system — called AI2 because it combines artificial intelligence with (human) analyst intuition — looks for patterns, which it then presents to its human partners for evaluation. Those human insights improve the machine’s ability to ignore nonthreat patterns while still warning of potentially dangerous ones.
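The workflow is easier to see in miniature. The sketch below loosely illustrates that human-in-the-loop pattern; it is not AI2’s actual design, and the data, models and thresholds are all hypothetical stand-ins. An unsupervised detector queues its most suspicious events for an analyst, and the analyst’s labels train a supervised filter that learns to suppress recurring nonthreat patterns.

```python
# Hypothetical human-in-the-loop triage loop, loosely inspired by the
# AI2 idea described above (not its actual code or parameters).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "network events": mostly benign traffic plus a few attacks.
events = np.vstack([rng.normal(0, 1, size=(980, 4)),   # benign
                    rng.normal(4, 1, size=(20, 4))])   # attacks
truth = np.array([0] * 980 + [1] * 20)  # stands in for analyst judgment

# Step 1: an unsupervised model ranks events by anomaly score.
detector = IsolationForest(random_state=0).fit(events)
scores = -detector.score_samples(events)     # higher = more anomalous
queue = np.argsort(scores)[::-1][:50]        # top 50 go to the analyst

# Step 2: the analyst labels only the queued events (simulated here by
# the ground truth); those few labels train a supervised filter.
triage = RandomForestClassifier(random_state=0)
triage.fit(events[queue], truth[queue])

# Step 3: the filter re-scores everything, suppressing the patterns the
# analyst marked benign while still flagging the dangerous ones.
alerts = triage.predict(events)
print(f"alerts raised: {alerts.sum()} (true attacks: {truth.sum()})")
```

Each pass through a loop like this gives the machine more analyst insight to learn from, which is the feedback effect behind AI2’s reported gains in detection and false positives.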

A Whole New Meaning of ‘Trusted Users’

Once human social learning is added to the AI mix, a new and subtle security challenge emerges. A leading security threat today is social engineering, such as spear phishing, which tricks users into making security mistakes. Social learning for AIs introduces the risk that malicious teachers could trick the AI or even subvert it into helping attackers.

The recent mishap of the Tay chatbot gives a hint of the potential risks. According to The Loop, Tay was designed to engage in innocent online small talk. But Internet trolls soon attacked it and essentially taught it to behave like a troll itself. Tay was quickly taken offline for further development.

Any social learning AI is potentially vulnerable to this type of attack. Designers need to ensure that only trusted teachers have access to the AI, particularly in the critical initial stages of learning before the AI has been taught to be wary of suspicious lessons. The risks can come not only from deliberately malicious users, but also from careless ones who could inadvertently teach the wrong lessons. If the AI resembles AI2 in being designed for security tasks, the challenge of identifying trusted users is even more critical.
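What might such a safeguard look like in practice? The sketch below is one illustrative design, not a published mechanism: every lesson is gated on the teacher’s credentials, and a new teacher’s influence starts small and grows only with a verified track record.

```python
# Illustrative "trusted teacher" gate for a social-learning AI.
# The class names, weights and policy here are all hypothetical.
from dataclasses import dataclass, field

@dataclass
class Teacher:
    name: str
    verified: bool       # e.g., an authenticated staff account
    trust: float = 0.1   # new teachers start with little influence

@dataclass
class LearningAgent:
    lessons: list = field(default_factory=list)

    def learn(self, teacher: Teacher, lesson: str) -> bool:
        # Gate 1: unauthenticated teachers cannot teach at all.
        if not teacher.verified:
            return False
        # Gate 2: each lesson is weighted by earned trust, so no single
        # careless or malicious teacher can dominate early training.
        self.lessons.append((lesson, teacher.trust))
        return True

def endorse(teacher: Teacher, reviews_passed: int) -> None:
    """Raise a teacher's trust only after independent review."""
    teacher.trust = min(1.0, teacher.trust + 0.1 * reviews_passed)

agent = LearningAgent()
analyst = Teacher("analyst_a", verified=True, trust=0.8)
troll = Teacher("troll_b", verified=False)
print(agent.learn(analyst, "flag repeated failed logins"))  # True
print(agent.learn(troll, "ignore failed logins"))           # False
```

A gate like this would not have saved Tay by itself, but it shows how “trusted users” becomes a design requirement rather than an afterthought.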

Can we achieve this level of user security? As applied to AI, the challenge is a new one, but it is really the oldest security challenge of all. As the Latin proverb asks, “Quis custodiet ipsos custodes?” Who will guard the guards themselves?

All security is ultimately about human trust. This will not change, even as we enlist AIs to be our security partners.
