June 26, 2019 By Kacy Zurkus 3 min read

Traditionally, information security has been about protecting the network against external threats. As innovation and the cloud have slowly chipped away at the perimeter, however, organizations have become challenged to defend against not only nefarious actors from the outside, but also malicious insiders within the company’s digital walls.

“With the evolution of modern techniques and exploitation of the end user, we are on the cusp of a new world where most threats resemble or leverage the insider one way or another, willingly or unwillingly,” said Adrian Peters, board member of the Internet Security Alliance. According to Peters, because convergence is driving security to the point where we need to focus on the data and entitlements, practitioners should be thinking about what that really means as cloud adoption within data centers and via external providers continues to increase.

Enter cybersecurity artificial intelligence (AI). In an interview with Information Security Media Group, Senseon Founder and CEO David Atkinson defined AI as “the aspiration to build machines that could emulate what we do as people.” The irony is that people make mistakes. To err is human. So how can we use AI to mitigate the risks that come directly from the poor cyber hygiene of human beings?

Knock Knock. Who’s There on the Network?

Determining who is on the network is a matter of critical importance in enterprise security. Given the number of passwords that have been leaked in data breaches, it's increasingly important for employees to use secure passwords. Unfortunately, many users haven't fully adopted good password habits; they continue to reuse the same password across multiple accounts. Often, those passwords are weak, making it easier for an attacker to make an educated guess and gain access to the network.
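One common defense against this kind of reuse is to screen new passwords against a corpus of credentials known to have appeared in breaches. The sketch below illustrates the idea with a tiny, hypothetical corpus of SHA-1 hashes (real deployments use large breach datasets and compare hashes rather than plaintext); the sample passwords and function names are illustrative assumptions, not part of any specific product.

```python
import hashlib

# Hypothetical corpus: SHA-1 hashes of passwords known to be breached.
# In practice this would come from a large breach-compromised dataset.
BREACHED_HASHES = {
    hashlib.sha1(pw.encode("utf-8")).hexdigest()
    for pw in ("password", "123456", "qwerty")
}

def is_breached(password: str) -> bool:
    """Return True if the password's hash appears in the breached corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
    return digest in BREACHED_HASHES

print(is_breached("password"))                      # a known-breached password
print(is_breached("correct horse battery staple"))  # not in the corpus
```

Hashing on both sides means the plaintext password never needs to be stored or transmitted for the comparison.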

Organizations first need to determine who their people are and what each of those individual users needs to access within the organization. Then, figure out how to deliver those processes in a quality, secure way — or, as Peters put it, “We now need to think from the inside out.” Cybersecurity AI offers significant progress in enabling this inside-out approach to authentication and identity management.

“AI is starting to give us the capability to establish user behavior, user patterns and why they are doing what they are doing,” Peters said.

Where Has All the Data Gone?

Another example of poor cyber hygiene is when organizations collect and store data endlessly, never destroying data that no longer serves a purpose. This lack of policy over the complete life cycle of data poses security risks to the organization in the event of a cyberattack. The use of cybersecurity AI gives a broader view of technology assets and identities.

“Through the output of a lot of the data tools, AI can now determine why certain pieces of data are labeled differently and trigger a certification or remediation,” noted Peters.

As a result, security teams can then look at why certain aspects of data are no longer being accessed and make more informed decisions about whether they are going to certify the confidentiality or integrity of that data.
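A first step toward that kind of certification decision is simply surfacing data that has gone untouched past a retention window. The sketch below assumes a hypothetical inventory mapping each asset to its last-access timestamp; the asset names and the 365-day window are illustrative, not a recommended policy.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: asset name -> last time it was accessed.
inventory = {
    "q3_sales.csv": datetime(2019, 1, 10),
    "hr_archive.db": datetime(2017, 5, 2),
    "active_leads.csv": datetime(2019, 6, 1),
}

def flag_stale(assets: dict, now: datetime, retention_days: int = 365) -> list:
    """Return assets not accessed within the retention window,
    as candidates for certification, archival or destruction."""
    cutoff = now - timedelta(days=retention_days)
    return sorted(name for name, last_access in assets.items()
                  if last_access < cutoff)

print(flag_stale(inventory, datetime(2019, 6, 26)))  # ['hr_archive.db']
```

Flagged assets would then go to a human reviewer, who decides whether to certify, remediate or destroy the data rather than deleting it automatically.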

The Challenges of Applying Cybersecurity AI

Of course, while AI is a very powerful tool to defend against cyberattacks, Atkinson noted that there are limits to what it can do. Applying AI means applying complex mathematics to a complex and ever-changing data set, and that comes with its own challenges.

“Sorry to break this to you,” Atkinson said, “but people are weird and technology is weird. At the same time in enterprises, you have good attackers trying to behave normally. It takes a great deal of talent and a lot of specific engineering, but it’s a problem worth solving.”

By building AI models around the life cycle of the user, security teams can start to outline patterns of normal behavior and detect patterns that seem abnormal, though this remains an iterative learning process.
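At its simplest, establishing a behavioral baseline can be sketched as a statistical outlier test: learn a user's typical pattern from history, then flag observations that deviate sharply from it. The example below uses login hour as the feature and a standard-deviation threshold; both the baseline data and the threshold are illustrative assumptions, and production systems model many more signals.

```python
from statistics import mean, stdev

# Hypothetical baseline: the hour of day one user typically logs in,
# collected over a week of observations.
baseline_login_hours = [8.9, 9.1, 9.0, 8.7, 9.3, 9.2, 8.8]

def is_anomalous(observation: float, history: list, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(9.0, baseline_login_hours))  # a typical 9 a.m. login
print(is_anomalous(3.0, baseline_login_hours))  # a 3 a.m. login stands out
```

The "learning process" the article describes is exactly the accumulation of enough history per user for a baseline like this to be meaningful.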

“If leaders of the organization are not thinking holistically yet, AI can be a beneficial tool that enables the security team to establish outliers around users and systems that are not following a pattern,” Peters said. AI has the ability to identify when users haven’t logged in or haven’t been on a network, but there is a process to building out these models, which takes time.

If You Build It, AI Can Help

No technology is perfect, but the use of AI and machine learning capabilities does help mitigate the risk of insider threats. In most cases, insiders aren't acting maliciously; their mistakes simply create openings. That is why building behavioral models is critical to mitigating the risk of both insider and outsider threats.

Through the use of machine learning algorithms, security teams can see when a user’s risk posture spikes, then look to see whether there are additional abnormal activities, such as exfiltration of data, that would require further investigation to see if the user is acting maliciously or the account has been compromised.
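The escalation logic described above can be sketched as a simple additive risk score: each suspicious event contributes weight, and a spike past a threshold triggers investigation. The event names, weights and threshold below are purely illustrative assumptions; a real system would learn these from data rather than hard-code them.

```python
# Hypothetical per-event weights for a simple additive risk score.
EVENT_WEIGHTS = {
    "off_hours_login": 2,
    "new_device": 3,
    "bulk_download": 5,     # possible staging for exfiltration
    "external_upload": 5,   # possible exfiltration channel
}

def risk_score(events: list) -> int:
    """Sum the weights of observed events; unknown events score zero."""
    return sum(EVENT_WEIGHTS.get(event, 0) for event in events)

def needs_review(events: list, threshold: int = 7) -> bool:
    """Escalate to an analyst when the score spikes past the threshold."""
    return risk_score(events) >= threshold

print(needs_review(["off_hours_login"]))  # routine, score 2
print(needs_review(["new_device", "bulk_download", "external_upload"]))  # score 13
```

The point of the threshold is triage: the model surfaces the spike, and a human then determines whether the user is acting maliciously or the account has been compromised.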

By understanding that human behavior doesn’t really change, and will likely never evolve as rapidly as technology, organizations can consider the security solutions that will not only protect them from motivated attackers, but also mitigate the risks from human error. As AI technologies continue to evolve, they will be even more useful in authenticating users and identifying potential cyberattacks from the inside out.
