The idea of bad actors stealing valuable assets brings to mind a picture of masked men breaking into a bank vault or museum and making a getaway with their illicit stash. But what if the enemy is one of us — someone who knows exactly where we keep our most valuable items, how we safeguard them and even the alarm code to disable the entire security system?

Distinguishing Malicious Insiders From Legitimate Users

Organizations hold patents, intellectual property, client data and other valuable information, and thousands of employees need access to those assets for legitimate reasons. With so much at stake, it is critical for security teams to be able to identify rogue staffers and to determine whether an employee's access credentials have been compromised by an external actor posing as an insider.

But how can security teams distinguish malicious insiders from legitimate users when suspicious activity closely resembles typical behavior? They must model each user's normal behavior and measure current activity against that baseline, flagging subtle characteristic changes and anomalies using user behavior analytics (UBA).

Anomalous activity can include a user logging in from a different geographic location, logging in via a virtual private network (VPN) at odd hours, or transferring high volumes of data from the network to an external site or cloud storage account. Any one of these activities by itself does not necessarily indicate malicious intent, but the combination of several suspicious behaviors warrants investigation by a security operations center (SOC) analyst to determine whether the user has gone rogue or had credentials stolen. Each anomalous activity increases the user’s risk score. When it crosses a certain threshold, the user needs to be investigated or closely monitored.
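The risk-score accumulation described above can be sketched in a few lines of Python. This is a minimal illustration; the event names, weights and threshold are assumptions for the example, not values from any specific UBA product:

```python
# Illustrative rule weights for the anomalies described above.
# These names and numbers are assumptions, not a product's actual rules.
RULE_WEIGHTS = {
    "new_geo_login": 15,        # login from an unfamiliar geographic location
    "off_hours_vpn": 10,        # VPN session at odd hours
    "bulk_external_upload": 25, # high-volume transfer to an external site
}

ALERT_THRESHOLD = 40  # hypothetical cutoff for SOC investigation


def risk_score(observed_events):
    """Sum the weights of every suspicious event observed for a user."""
    return sum(RULE_WEIGHTS.get(event, 0) for event in observed_events)


def needs_investigation(observed_events):
    """True once the accumulated score crosses the investigation threshold."""
    return risk_score(observed_events) >= ALERT_THRESHOLD


# One anomaly alone stays below the threshold...
print(needs_investigation(["new_geo_login"]))  # False
# ...but a combination of suspicious behaviors crosses it.
print(needs_investigation(
    ["new_geo_login", "off_hours_vpn", "bulk_external_upload"]))  # True
```

The point of the sketch is the additive logic: no single event triggers an alert, but several together push the user over the line for investigation or closer monitoring.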

Unlocking the Power of Machine Learning

Rules-based anomaly detection is a great way to identify illicit behaviors, but what if the clues are much more subtle? That’s where machine learning can help.

Consider, for example, an employee in the marketing department. If this employee plans to quit and take proprietary data to a rival firm, he or she is unlikely to change routine drastically. Instead, the user exhibits subtle activity changes that, taken together, indicate malicious intent.

A UBA solution powered by machine learning uses unsupervised learning to model a user's behavior across various categories, such as authentication, network access, firewall activity, application activity, port or network scans, denial-of-service events, and malware activity. The user's risk score increases based on deviation from the baseline the model has established. The model also weighs both the nature and the frequency of deviations to give you a fuller picture of the user's risk posture.
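One simple way to flag deviation from a learned baseline is a z-score test against the user's own history. The sketch below is an illustration of the concept only; the 3-sigma cutoff and the sample data are assumptions, not the algorithm used by any particular UBA product:

```python
# Minimal sketch of baseline deviation detection using a z-score.
# The cutoff and data are illustrative assumptions.
from statistics import mean, stdev


def is_anomalous(history, todays_value, z_cutoff=3.0):
    """True if today's activity deviates more than z_cutoff standard
    deviations from the user's established baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return todays_value != mu
    return abs(todays_value - mu) / sigma > z_cutoff


# Daily megabytes uploaded by one user over the past two weeks.
baseline_mb = [12, 15, 10, 14, 13, 11, 16, 12, 14, 13, 15, 12, 11, 14]

print(is_anomalous(baseline_mb, 14))   # typical day -> False
print(is_anomalous(baseline_mb, 250))  # sudden bulk upload -> True
```

Production systems model many such categories at once and learn the baselines continuously, but the core idea is the same: score the distance between observed activity and the user's own normal.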

Peer group analytics offer yet another lens into a user’s activities to help identify when a user deviates from the typical behavior of employees with similar roles and responsibilities.
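Peer group comparison follows the same pattern, except the reference distribution comes from colleagues rather than the user's own history. The metric, team values and 2-sigma cutoff below are hypothetical, chosen only to illustrate the idea:

```python
# Minimal sketch of peer group analytics: compare one user's activity
# against employees with similar roles. All values are assumptions.
from statistics import mean, stdev


def deviates_from_peers(user_value, peer_values, z_cutoff=2.0):
    """True if the user's activity is far outside the peer group's norm."""
    mu = mean(peer_values)
    sigma = stdev(peer_values)
    if sigma == 0:
        return user_value != mu
    return abs(user_value - mu) / sigma > z_cutoff


# Daily file-share accesses by other members of the marketing team.
peer_access_counts = [20, 25, 22, 18, 24, 21]

print(deviates_from_peers(23, peer_access_counts))   # in line with peers -> False
print(deviates_from_peers(400, peer_access_counts))  # far outside the norm -> True
```

This second lens catches cases the personal baseline misses, such as a user whose activity has always been abnormal relative to his or her role.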

Learn More

Learn more about QRadar User Behavior Analytics and try the free QRadar UBA app from the IBM Security App Exchange. You can also watch this video to learn how you can combine QRadar UBA and QRadar Advisor with Watson to investigate suspicious behavior.

If you are attending Think 2018 in Las Vegas, check out the Security and Resiliency Campus and attend the sessions on user behavior analytics. You can also view the Think 2018 Security & Resiliency sessions on demand.
