Authored by David Shipley, Director of Strategic Initiatives, Information Technology Services, University of New Brunswick.

Embracing Cognitive Security Solutions

In many organizations, security is assumed rather than actively pursued. It is my job to make sure that isn’t the case. Because UNB serves as the data center for three other universities in our province, my security team at the University of New Brunswick (UNB) protects a large digital bank of information with a fraction of the security resources of larger organizations. We have to protect student records, proprietary research material and other assets that criminals value highly.

A university is like the Mos Eisley spaceport of cybersecurity. We have every bad thing you could imagine: malware, vulnerable devices, patching issues and bring-your-own-device (BYOD) everywhere. We are, by our nature, open and transparent, yet we are supposed to be secure. Those two things do not go well together; we exist in that uncomfortable friction. Because of that, however, we are the perfect breeding ground for new ideas.

After the Gold Rush

We are faced with an exponentially growing volume of attacks due to the proliferation of new tools for cybercriminals. Today, the barriers to entry for cybercrime are tremendously low, creating a kind of gold rush. I feel this is due to a number of factors, including the absence of a real global cybercrime framework and of national policing resources to address incidents and attacks. I am also worried about the amount of money cybercriminals are obtaining to reinvest into their capabilities, widening the gap between the attackers and the attacked.

We are outgunned and need new capabilities to use as force multipliers to level the playing field with cybercriminals. UNB is exploring cognitive security solutions with IBM to augment our capabilities to deal with these challenges. UNB is one of eight universities in North America chosen by IBM to help adapt Watson cognitive technology for use in the cybersecurity battle. We are feeding real data into the Watson system as a natural extension of the work we are doing for security information and event management (SIEM).
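To make the idea of feeding SIEM data into a cognitive system concrete, here is a minimal sketch of what such a pipeline could look like. It is an illustration only: the article does not describe UNB's actual integration, and every endpoint, field name and credential below is a hypothetical placeholder rather than IBM's or any SIEM vendor's real API.

```python
"""Hypothetical sketch: forward open SIEM offenses to a cognitive analysis
service for enrichment. All URLs, fields and tokens are placeholders."""
import requests

SIEM_API = "https://siem.example.edu/api/offenses"       # hypothetical SIEM REST endpoint
COGNITIVE_API = "https://cognitive.example.com/analyze"  # hypothetical analysis service
HEADERS = {"Authorization": "Bearer <token>"}            # placeholder credential


def fetch_open_offenses():
    """Pull currently open offenses from the SIEM's REST interface."""
    resp = requests.get(SIEM_API, headers=HEADERS, params={"status": "OPEN"}, timeout=30)
    resp.raise_for_status()
    return resp.json()


def submit_for_analysis(offense):
    """Send an offense's observables to the cognitive service for context."""
    payload = {
        "offense_id": offense["id"],
        "source_ips": offense.get("source_ips", []),
        "categories": offense.get("categories", []),
    }
    resp = requests.post(COGNITIVE_API, headers=HEADERS, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g., related threat reports and confidence scores


if __name__ == "__main__":
    for offense in fetch_open_offenses():
        enrichment = submit_for_analysis(offense)
        print(offense["id"], enrichment.get("summary", "no context returned"))
```

The point of the sketch is simply that offense data already flowing through a SIEM can be reused as the input to a cognitive service, rather than requiring a separate collection effort.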

Stop Fighting Fires

We have high expectations for cognitive security solutions in the coming years. The technology has so much potential to address our labor shortage, reduce our risk profile and improve the efficiency of our response.

Cognitive systems can leverage unstructured data to provide the context behind attacks and offer an informed second opinion that increases our confidence when making decisions. I read a lot on a daily basis, but that might help me discover roughly 1 percent of what is out there in terms of the latest threats and risks at any given time. How am I supposed to apply only 1 percent against hundreds of active offenses every day? I hope cognitive security solutions can enable me to take a more holistic view of my cybersecurity situation.
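One way that external context could help with hundreds of daily offenses is triage: rank what analysts see first by blending the SIEM's own severity with the confidence of the enrichment. The sketch below assumes hypothetical fields ("confidence", "severity", "known_campaign") returned by an enrichment step like the one above; it is not a documented Watson for Cyber Security schema.

```python
# Hypothetical triage sketch: rank enriched offenses so the riskiest items
# surface first. Field names and weights are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class EnrichedOffense:
    offense_id: int
    severity: int         # 1-10 as reported by the SIEM
    confidence: float     # 0.0-1.0 from the cognitive enrichment
    known_campaign: bool  # matched to a named campaign in unstructured intel


def triage_score(o: EnrichedOffense) -> float:
    """Blend SIEM severity with external context into a single ranking score."""
    score = o.severity * o.confidence
    if o.known_campaign:
        score *= 1.5  # escalate anything tied to a known, active campaign
    return score


def prioritize(offenses: list[EnrichedOffense]) -> list[EnrichedOffense]:
    """Return offenses ordered so analysts see the highest-risk items first."""
    return sorted(offenses, key=triage_score, reverse=True)


if __name__ == "__main__":
    queue = [
        EnrichedOffense(101, severity=7, confidence=0.4, known_campaign=False),
        EnrichedOffense(102, severity=5, confidence=0.9, known_campaign=True),
    ]
    for o in prioritize(queue):
        print(o.offense_id, round(triage_score(o), 2))
```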

Ultimately, I believe that these Watson-based solutions will allow security professionals to move to a higher level of value for their organizations. Cognitive solutions can help them get away from merely firefighting and into tackling longer-term strategic issues, such as user behavior and organizational culture, that can change the outcome of the present one-sided battle.

Read the IBM Executive Report: Cybersecurity in the cognitive era
