June 26, 2019 By Kacy Zurkus 3 min read

Traditionally, information security has been about protecting the network against external threats. As innovation and the cloud have slowly chipped away at the perimeter, however, organizations have become challenged to defend against not only nefarious actors from the outside, but also malicious insiders within the company’s digital walls.

“With the evolution of modern techniques and exploitation of the end user, we are on the cusp of a new world where most threats resemble or leverage the insider one way or another, willingly or unwillingly,” said Adrian Peters, board member of the Internet Security Alliance. According to Peters, convergence is driving security to the point where we need to focus on data and entitlements, and practitioners should be thinking about what that really means as cloud adoption, both within data centers and via external providers, continues to increase.

Enter cybersecurity artificial intelligence (AI). In an interview with Information Security Media Group, Senseon Founder and CEO David Atkinson defined AI as “the aspiration to build machines that could emulate what we do as people.” The irony is that people make mistakes. To err is human. So how can we use AI to mitigate the risks that come directly from the poor cyber hygiene of human beings?

Knock Knock. Who’s There on the Network?

Determining who is on the network is a matter of critical importance in enterprise security. Given the number of passwords leaked in data breaches, it is increasingly important for employees to use secure passwords. Unfortunately, many users haven't adopted good password habits; they continue to reuse the same password across multiple accounts. Often, those passwords are weak, making it easier for an attacker to guess credentials and gain access to the network.
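One practical defense against reuse of leaked passwords is to screen new credentials against known-breached values. The sketch below is illustrative only, using a tiny local set of SHA-1 hashes; real deployments typically query a breach corpus such as the Have I Been Pwned Pwned Passwords service via its k-anonymity API.

```python
import hashlib

# Hypothetical breached-password set for illustration; in practice this
# would come from a breach corpus, not a hard-coded list.
BREACHED_SHA1 = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ("password", "123456", "qwerty", "letmein")
}

def is_breached(password: str) -> bool:
    """Return True if the password's SHA-1 hash appears in the breached set."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_SHA1

print(is_breached("letmein"))       # a commonly reused password -> True
print(is_breached("x9#Lq!v2TznE"))  # not in the breached set -> False
```

Rejecting such passwords at enrollment time removes the easiest educated guesses from an attacker's playbook.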

Organizations first need to determine who their people are and what each of those individual users needs to access within the organization. Then, figure out how to deliver those processes in a quality, secure way — or, as Peters put it, “We now need to think from the inside out.” Cybersecurity AI offers significant progress in enabling this inside-out approach to authentication and identity management.

“AI is starting to give us the capability to establish user behavior, user patterns and why they are doing what they are doing,” Peters said.
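Establishing user behavior and patterns, as Peters describes, often starts with a statistical baseline. The following is a minimal sketch (not any vendor's actual algorithm) that flags a login as anomalous when its hour of day falls far outside a user's historical pattern; all data and thresholds are hypothetical.

```python
from statistics import mean, stdev

def is_anomalous_login(history_hours, new_hour, threshold=3.0):
    """Flag a login hour more than `threshold` standard deviations
    from the user's historical mean login hour."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > threshold

# Hypothetical history: a user who logs in around 9 a.m. on weekdays.
weekday_logins = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
print(is_anomalous_login(weekday_logins, 9))   # within the usual pattern
print(is_anomalous_login(weekday_logins, 3))   # a 3 a.m. login stands out
```

Production systems model many more features (location, device, resource accessed), but the principle is the same: learn the pattern, then score deviations.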

Where Has All the Data Gone?

Another example of poor cyber hygiene is when organizations collect and store data endlessly, with no limit on what is collected and no destruction of data that no longer serves a purpose. The absence of a policy covering the complete life cycle of data poses security risks to the organization in the event of a cyberattack. Cybersecurity AI gives a broader view of technology assets and identities.

“Through the output of a lot of the data tools, AI can now determine why certain pieces of data are labeled differently and trigger a certification or remediation,” noted Peters.

As a result, security teams can then look at why certain aspects of data are no longer being accessed and make more informed decisions about whether they are going to certify the confidentiality or integrity of that data.
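The certification-or-destruction decision described above can be grounded in a simple retention sweep. This is a hypothetical sketch, with illustrative record names and a one-year window: records whose last access falls outside the retention period are surfaced for review rather than left to accumulate.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # illustrative retention window

def expired(records, now):
    """Return names of records not accessed within the retention window."""
    return [r["name"] for r in records if now - r["last_access"] > RETENTION]

now = datetime(2019, 6, 26)
records = [
    {"name": "q1_sales.csv", "last_access": datetime(2019, 5, 1)},
    {"name": "legacy_export.bak", "last_access": datetime(2017, 2, 14)},
]
print(expired(records, now))  # ['legacy_export.bak']
```

A real data governance pipeline would also consider classification labels and legal holds before triggering destruction, which is where the AI-driven labeling Peters mentions comes in.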

The Challenges of Applying Cybersecurity AI

Of course, while AI is a powerful tool for defending against cyberattacks, Atkinson noted that there are limits to what it can do. Applying AI means applying complex mathematics to a complex, ever-changing data set, and that has its challenges.

“Sorry to break this to you,” Atkinson said, “but people are weird and technology is weird. At the same time in enterprises, you have good attackers trying to behave normally. It takes a great deal of talent and a lot of specific engineering, but it’s a problem worth solving.”

By building AI models around the life cycle of the user, security teams can start to outline patterns of normal behavior and detect patterns that seem abnormal, but it is very much a learning process.

“If leaders of the organization are not thinking holistically yet, AI can be a beneficial tool that enables the security team to establish outliers around users and systems that are not following a pattern,” Peters said. AI has the ability to identify when users haven’t logged in or haven’t been on a network, but there is a process to building out these models, which takes time.
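Detecting users who "haven't logged in or haven't been on a network" is one of the simpler outliers such models surface. The sketch below is purely illustrative (names and dates are hypothetical): it flags accounts idle beyond a threshold, which may indicate orphaned accounts ripe for takeover.

```python
from datetime import date, timedelta

def dormant_accounts(last_logins, today, max_idle_days=30):
    """Return users whose last login predates the idle cutoff."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(user for user, last in last_logins.items() if last < cutoff)

# Hypothetical last-login records.
last_logins = {
    "alice": date(2019, 6, 20),
    "bob": date(2019, 3, 2),    # long idle: possible orphaned account
    "carol": date(2019, 6, 25),
}
print(dormant_accounts(last_logins, date(2019, 6, 26)))  # ['bob']
```

Dormant accounts are a natural starting point because, unlike subtler behavioral outliers, they require no trained model, only complete identity data, which is itself part of the time-consuming build-out Peters describes.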

If You Build It, AI Can Help

No technology is perfect, but the use of AI and machine learning capabilities does help mitigate the risk of insider threats. In most cases, users aren't acting maliciously, which is why well-built behavioral models are critical to mitigating the risk of both insider and outsider threats.

Through the use of machine learning algorithms, security teams can see when a user's risk posture spikes, then check for additional abnormal activities, such as data exfiltration, that warrant further investigation to determine whether the user is acting maliciously or the account has been compromised.
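A risk-posture spike of the kind described above can be approximated by comparing a user's current activity against a recent baseline. This sketch is hypothetical, using daily outbound data volume with an illustrative threshold; a sudden multiple of the baseline is the kind of signal that would trigger an exfiltration investigation.

```python
from statistics import mean

def risk_spike(daily_mb, today_mb, factor=5.0):
    """Flag today's outbound volume if it exceeds `factor` times
    the user's recent daily average."""
    baseline = mean(daily_mb)
    return today_mb > factor * baseline

history = [120, 95, 110, 130, 105]  # MB uploaded per day, hypothetical
print(risk_spike(history, 140))     # a normal day -> False
print(risk_spike(history, 4200))    # possible exfiltration -> True
```

On its own, a volume spike proves nothing; as the article notes, it is the prompt to correlate with other abnormal activity before concluding malice or compromise.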

By understanding that human behavior doesn’t really change, and will likely never evolve as rapidly as technology, organizations can consider the security solutions that will not only protect them from motivated attackers, but also mitigate the risks from human error. As AI technologies continue to evolve, they will be even more useful in authenticating users and identifying potential cyberattacks from the inside out.
