January 5, 2017 By Christophe Veltsos 2 min read

We are the Borg. You will be assimilated. Resistance is futile.

Organizations today find themselves in a situation not unlike that of the Enterprise crew in "Star Trek: The Next Generation." They are facing a formidable, technologically advanced enemy capable of taking over key components of the organization. In one episode, in fact, the Borg collective takes control of Captain Jean-Luc Picard himself, to the horror of his crew.

Cognitive Security Fills In Critical Gaps

So how can organizations fight an enemy of great patience and persistence, capable of launching surgically targeted attacks and assimilating systems and people into its own collective? Machine learning is our best hope.

Machine learning is our best hope to help incident responders, security analysts and, ultimately, security leaders restore a sense of balance, a sense of peace in the universe. When the time to discover an intrusion is measured in months or years; when the people responsible for keeping track of threat vectors can’t keep up; when the incident responders can’t determine quickly and accurately enough whether an alert is a benign event or the tip of the iceberg for what could be a jaw-dropping cyber incident … where can you turn?
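The triage problem described above, deciding quickly whether an alert is a benign event or the start of something worse, is exactly the kind of task machine learning aims to speed up. As a purely illustrative sketch (the feature names, alert history and scoring scheme below are all invented, not from any IBM product), a system can score a new alert by how strongly its indicators overlap with previously labeled incidents:

```python
# Toy illustration: scoring a new security alert against labeled
# historical alerts. All feature names and data here are hypothetical.
from collections import Counter

# Historical alerts: (set of indicator features, analyst-assigned label)
history = [
    ({"odd_hours", "new_geo", "priv_escalation"}, "malicious"),
    ({"odd_hours", "large_upload"}, "malicious"),
    ({"failed_login", "known_host"}, "benign"),
    ({"known_host", "patch_window"}, "benign"),
]

def score_alert(features):
    """Return a 0..1 score: share of feature overlaps that came
    from malicious historical alerts (0.5 if no overlap at all)."""
    counts = Counter()
    for feats, label in history:
        for _ in feats & features:
            counts[label] += 1
    total = counts["malicious"] + counts["benign"]
    return counts["malicious"] / total if total else 0.5

suspicious = score_alert({"odd_hours", "new_geo"})   # overlaps malicious history
routine = score_alert({"known_host", "failed_login"})  # overlaps benign history
```

A real cognitive security system would of course learn from millions of alerts and unstructured threat intelligence rather than a hand-coded table, but the principle is the same: let the machine do the first pass so human responders spend their time on the alerts that matter.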

“Star Trek,” of course. And when you’re done binge-watching all the episodes, you can start looking into the progress that has been made in cognitive security — the application of artificial intelligence and machine learning concepts to the cybersecurity battlefront.

A recent report by IBM’s Institute for Business Value (IBV), based on a survey of 700 security leaders back here on Earth, revealed that IT teams are looking for ways to more effectively address three main shortcomings in their security response capabilities: a speed gap, an accuracy gap and an intelligence gap.

Resistance Is Futile Without Machine Learning

According to the survey, most security leaders have high hopes for cognitive security, though there will certainly be some resistance — most likely from the same folks who resisted the move to the cloud, or those who try to argue that their organizations should steer clear of investments in Internet of Things (IoT) projects.

Even this resistance will be overcome, though not by the Borg. It will be overcome by pressure from competitors, pressure from the marketplace and pressure from top leadership. Yes, machine learning can be scary; I fully realize the weight of these words knowing that in the very near future, such an intelligence might be reading this very article, processing its words, absorbing its points and counterpoints and perhaps even making sense of my analogies. However, without the assistance of cognitive computing, I don't see a way for us to level the cybersecurity playing field.

We have no choice but to boldly go where no human has gone before — to trust artificial intelligence.

Read the full IBM report, "Cybersecurity in the Cognitive Era."
