January 5, 2017 | By Christophe Veltsos

We are the Borg. You will be assimilated. Resistance is futile.

Organizations today find themselves in a situation not unlike that of the Enterprise crew in “Star Trek: The Next Generation.” They are facing a formidable, technologically advanced enemy capable of taking over key components of the organization. In one episode, in fact, the Borg collective takes control of Captain Jean-Luc Picard himself, to the horror of his crew.

Cognitive Security Fills In Critical Gaps

So how can organizations fight an enemy of great patience and persistence, capable of launching surgically targeted attacks and assimilating systems and people into its own collective? Machine learning is our best hope.

Machine learning is our best hope to help incident responders, security analysts and, ultimately, security leaders restore a sense of balance, a sense of peace in the universe. When the time to discover an intrusion is measured in months or years; when the people responsible for keeping track of threat vectors can’t keep up; when the incident responders can’t determine quickly and accurately enough whether an alert is a benign event or the tip of the iceberg for what could be a jaw-dropping cyber incident … where can you turn?

“Star Trek,” of course. And when you’re done binge-watching all the episodes, you can start looking into the progress that has been made in cognitive security — the application of artificial intelligence and machine learning concepts to the cybersecurity battlefront.

A recent report by IBM’s Institute for Business Value (IBV), based on a survey of 700 security leaders back here on Earth, revealed that IT teams are looking for ways to more effectively address three main shortcomings in their security response capabilities: a speed gap, an accuracy gap and an intelligence gap.
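
To make the speed and accuracy gaps concrete, here is a minimal, hypothetical sketch of how a supervised model could rank incoming alerts by risk so responders investigate the most suspicious ones first. Everything in it is invented for illustration: the feature names, the synthetic data and the model choice are assumptions, not a description of any particular product’s approach.

```python
# A minimal sketch of ML-assisted alert triage. The three alert features
# below are hypothetical, and the data is synthetic; this is illustrative
# only, not a production detection pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Each row describes one alert with three made-up features:
# [events_per_minute, source_reputation (0-1), est_bytes_exfiltrated].
benign = rng.normal(loc=[5, 0.9, 100], scale=[2, 0.05, 50], size=(500, 3))
malicious = rng.normal(loc=[50, 0.3, 5000], scale=[15, 0.1, 2000], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = needs escalation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score held-out alerts by predicted probability of being malicious,
# so responders can work the riskiest alerts first instead of a FIFO queue.
scores = clf.predict_proba(X_test)[:, 1]
print("Mean accuracy on held-out alerts:", clf.score(X_test, y_test))
print("Top-5 riskiest alert scores:", np.sort(scores)[-5:])
```

Even a rough risk ranking like this shows the idea behind closing those gaps: the model does the first, fast pass over the flood of alerts, and the humans spend their scarce attention on the ones most likely to matter.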

Resistance Is Futile Without Machine Learning

According to the survey, most security leaders have high hopes for cognitive security, though there will certainly be some resistance — most likely from the same folks who resisted the move to the cloud, or those who try to argue that their organizations should steer clear of investments in Internet of Things (IoT) projects.

Even this resistance will be overcome, though not by the Borg. It will be overcome by pressure from competitors, from the marketplace and from top leadership. Yes, machine learning can be scary; I fully realize the weight of these words, knowing that in the very near future such an intelligence might be reading this very article, processing its words, absorbing its points and counterpoints and perhaps even making sense of my analogies. However, without the assistance of cognitive computing, I don’t see a way for us to level the cybersecurity playing field.

We have no choice but to boldly go where no human has gone before — to trust artificial intelligence.

Read the full IBM report, “Cybersecurity in the Cognitive Era.”
