June 26, 2019 By Kacy Zurkus 3 min read

Traditionally, information security has been about protecting the network against external threats. As innovation and the cloud have slowly chipped away at the perimeter, however, organizations have become challenged to defend against not only nefarious actors from the outside, but also malicious insiders within the company’s digital walls.

“With the evolution of modern techniques and exploitation of the end user, we are on the cusp of a new world where most threats resemble or leverage the insider one way or another, willingly or unwillingly,” said Adrian Peters, board member of the Internet Security Alliance. According to Peters, because convergence is driving security to the point where we need to focus on the data and entitlements, practitioners should be thinking about what that really means as cloud adoption within data centers and via external providers continues to increase.

Enter cybersecurity artificial intelligence (AI). In an interview with Information Security Media Group, Senseon Founder and CEO David Atkinson defined AI as “the aspiration to build machines that could emulate what we do as people.” The irony is that people make mistakes. To err is human. So how can we use AI to mitigate the risks that come directly from the poor cyber hygiene of human beings?

Knock Knock. Who’s There on the Network?

Determining who is on the network is a matter of critical importance in enterprise security. Given the number of passwords leaked in data breaches, it is increasingly important for employees to use strong, unique passwords. Unfortunately, many users haven’t adopted good password habits: they reuse the same password across multiple accounts, and those passwords are often weak, making it easier for an attacker to make an educated guess and gain access to the network.
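One way organizations put this into practice is by screening passwords at enrollment against length rules and known-breached values. The sketch below illustrates the idea; the tiny breached-hash set and the 12-character minimum are assumptions for illustration, and a real deployment would load hashes from a full breach corpus.

```python
import hashlib

# Illustrative stand-in for a breached-password corpus; in practice this
# would be loaded from a large leaked-password data set.
BREACHED_SHA1 = {
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ("password", "123456", "qwerty", "letmein")
}

def is_weak(password: str, min_length: int = 12) -> bool:
    """Reject passwords that are too short or appear in the breach corpus."""
    if len(password) < min_length:
        return True
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_SHA1

print(is_weak("letmein"))                       # short and breached -> True
print(is_weak("correct horse battery staple"))  # long passphrase -> False
```

Checking hashes rather than plaintext means the screening service never needs to store the breached passwords themselves.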

Organizations first need to determine who their people are and what each of those individual users needs to access within the organization. Then, figure out how to deliver those processes in a quality, secure way — or, as Peters put it, “We now need to think from the inside out.” Cybersecurity AI offers significant progress in enabling this inside-out approach to authentication and identity management.

“AI is starting to give us the capability to establish user behavior, user patterns and why they are doing what they are doing,” Peters said.
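The behavioral baselining Peters describes can be reduced to a toy example: learn each user's typical login hour from history, then flag logins that deviate sharply from it. The login data, the three-sigma threshold and the minimum-spread floor below are all invented for illustration, not taken from any vendor's implementation.

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Summarize a user's historical login hours as (mean, std deviation)."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login hour that sits far outside the user's usual pattern."""
    mu, sigma = baseline
    # Floor sigma so very regular users don't trigger on tiny deviations.
    return abs(hour - mu) > threshold * max(sigma, 0.5)

history = [9, 9, 10, 8, 9, 10, 9, 8]   # habitual morning logins
baseline = build_baseline(history)
print(is_anomalous(9, baseline))   # in-pattern login -> False
print(is_anomalous(3, baseline))   # 3 a.m. login -> True
```

Production systems model many more features (device, location, resources accessed), but the principle is the same: a per-user baseline turns "who is on the network" into "is this really that user?"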

Where Has All the Data Gone?

Another example of poor cyber hygiene is collecting and storing data indefinitely, with no limit on what is gathered and no destruction of data that no longer serves a purpose. The absence of a policy covering the complete life cycle of data poses security risks to the organization in the event of a cyberattack. Cybersecurity AI can provide a broader view of technology assets and identities.

“Through the output of a lot of the data tools, AI can now determine why certain pieces of data are labeled differently and trigger a certification or remediation,” noted Peters.

As a result, security teams can then look at why certain aspects of data are no longer being accessed and make more informed decisions about whether they are going to certify the confidentiality or integrity of that data.

The Challenges of Applying Cybersecurity AI

Of course, while AI is a powerful tool for defending against cyberattacks, Atkinson noted that there are limits to what it can and cannot do. Applying AI means applying complex mathematics to a complex, ever-changing data set, and that comes with its own challenges.

“Sorry to break this to you,” Atkinson said, “but people are weird and technology is weird. At the same time in enterprises, you have good attackers trying to behave normally. It takes a great deal of talent and a lot of specific engineering, but it’s a problem worth solving.”

By building AI models around the life cycle of the user, security teams can begin to outline patterns of normal behavior and detect patterns that seem abnormal. It is, fittingly, a learning process.

“If leaders of the organization are not thinking holistically yet, AI can be a beneficial tool that enables the security team to establish outliers around users and systems that are not following a pattern,” Peters said. AI has the ability to identify when users haven’t logged in or haven’t been on a network, but there is a process to building out these models, which takes time.
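One of the simplest outliers Peters mentions, users who haven't logged in for a long stretch, can be surfaced with a plain dormancy check before any model is trained. The account names and the 30-day idle window below are illustrative assumptions.

```python
from datetime import datetime, timedelta

def dormant_accounts(last_logins, now, max_idle_days=30):
    """Return accounts whose last login is older than the idle cutoff."""
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(user for user, seen in last_logins.items() if seen < cutoff)

now = datetime(2019, 6, 26)
last_logins = {
    "alice": datetime(2019, 6, 25),
    "bob": datetime(2019, 3, 1),       # long-idle: candidate for review
    "svc-backup": datetime(2019, 6, 20),
}
print(dormant_accounts(last_logins, now))  # ['bob']
```

Flagged accounts become inputs to the certification or remediation workflow described above: disable them, or confirm they are still needed.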

If You Build It, AI Can Help

No technology is perfect, but AI and machine learning capabilities do help mitigate the risk of insider threats. In most cases, users aren’t acting maliciously, which is why building behavioral models is critical to mitigating the risks posed by both insiders and outsiders.

Through machine learning algorithms, security teams can see when a user’s risk posture spikes, then check for additional abnormal activities, such as data exfiltration, that warrant further investigation to determine whether the user is acting maliciously or the account has been compromised.
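A risk-posture spike of this kind is often scored as a deviation from the user's own history. The sketch below compares a user's latest daily data-egress volume against their past week using a z-score; the traffic figures, the z-score cutoff and the minimum-spread floor are illustrative assumptions, not a specific product's logic.

```python
from statistics import mean, stdev

def egress_spike(history_mb, today_mb, z_cutoff=3.0):
    """Flag a daily egress volume far above the user's own baseline."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    # Floor sigma to avoid division blow-ups for very steady users.
    z = (today_mb - mu) / max(sigma, 1.0)
    return z > z_cutoff

history = [120, 95, 110, 130, 105, 100, 115]  # normal daily uploads (MB)
print(egress_spike(history, 118))   # typical day -> False
print(egress_spike(history, 900))   # sudden bulk transfer -> True
```

A hit doesn't prove exfiltration; it tells analysts where to look first, which is exactly the triage role the article describes.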

By understanding that human behavior doesn’t really change, and will likely never evolve as rapidly as technology, organizations can consider the security solutions that will not only protect them from motivated attackers, but also mitigate the risks from human error. As AI technologies continue to evolve, they will be even more useful in authenticating users and identifying potential cyberattacks from the inside out.
