The idea of bad actors stealing valuable assets brings to mind a picture of masked men breaking into a bank vault or museum and making a getaway with their illicit stash. But what if the enemy is one of us — someone who knows exactly where we keep our most valuable items, how we safeguard them and even the alarm code to disable the entire security system?

Distinguishing Malicious Insiders From Legitimate Users

Organizations hold patents, intellectual property, client data and other valuable information, and thousands of employees need access to those assets for legitimate reasons. With so much at stake, it is critical for security teams to be able to identify rogue staffers and determine whether an employee's access credentials have been compromised by an external actor posing as an insider.

But how can security teams distinguish malicious insiders from legitimate users when suspicious activity closely resembles typical behavior? They must model each user's normal behavior and measure it against subtle changes in characteristic patterns and anomalous activity using user behavior analytics (UBA).

Anomalous activity can include a user logging in from a different geographic location, logging in via a virtual private network (VPN) at odd hours, or transferring high volumes of data from the network to an external site or cloud storage account. Any one of these activities by itself does not necessarily indicate malicious intent, but the combination of several suspicious behaviors warrants investigation by a security operations center (SOC) analyst to determine whether the user has gone rogue or had credentials stolen. Each anomalous activity increases the user’s risk score. When it crosses a certain threshold, the user needs to be investigated or closely monitored.
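The rules-based scoring described above can be sketched in a few lines of Python. This is an illustrative example only: the event names, weights and threshold are hypothetical, not taken from any particular UBA product.

```python
# Hypothetical sketch of rules-based risk scoring: each anomalous event
# adds weight to a user's risk score, and crossing a threshold flags the
# user for investigation by a SOC analyst. Weights are illustrative.

RULE_WEIGHTS = {
    "login_new_geo": 15,   # login from an unusual geographic location
    "vpn_odd_hours": 10,   # VPN login outside normal working hours
    "large_egress": 25,    # high-volume transfer to an external site
}
INVESTIGATE_THRESHOLD = 40

def risk_score(events):
    """Sum the weights of all anomalous events observed for a user."""
    return sum(RULE_WEIGHTS.get(e, 0) for e in events)

def needs_investigation(events):
    return risk_score(events) >= INVESTIGATE_THRESHOLD

# One suspicious event alone stays below the threshold...
print(needs_investigation(["vpn_odd_hours"]))  # False
# ...but a combination of suspicious behaviors crosses it.
print(needs_investigation(["login_new_geo", "vpn_odd_hours", "large_egress"]))  # True
```

Note that no single rule decides the outcome; it is the accumulation of several otherwise-explainable events that pushes the score over the line, mirroring how an analyst would reason about the activity.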

Unlocking the Power of Machine Learning

Rules-based anomaly detection is a great way to identify illicit behaviors, but what if the clues are much more subtle? That’s where machine learning can help.

Consider, for example, an employee in the marketing department who plans to quit and take proprietary data to a rival firm. Such a user typically does not change his or her routine drastically. Instead, the telltale signs are subtle shifts in otherwise ordinary activity — accessing slightly more documents than usual, touching file shares tangential to his or her role, or moving data at marginally unusual times — small changes that individually look benign but together indicate malicious intent.

A UBA solution powered by machine learning uses unsupervised learning to model a user's behavior across categories such as authentication, network access, firewall activity, application activity, port and network scans, denial-of-service events, and malware activity. The user's risk score increases based on deviation from the baseline the model establishes, and the model weighs both how far and how often activity deviates from normal to give you a picture of the user's risk posture.
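One simple way to think about baseline deviation is a per-category z-score: learn the mean and spread of a user's historical activity in each category, then score new activity by how far it sits above that norm. The sketch below is a minimal illustration of that idea, not the model any specific UBA product uses; the categories and counts are invented.

```python
import statistics

def build_baseline(history):
    """history: {category: [daily counts]} -> {category: (mean, stdev)}."""
    return {cat: (statistics.mean(v), statistics.stdev(v))
            for cat, v in history.items()}

def deviation_score(baseline, today):
    """Sum positive z-scores across categories: only unusually HIGH
    activity (relative to the user's own baseline) raises risk."""
    score = 0.0
    for cat, count in today.items():
        mean, sd = baseline.get(cat, (0.0, 1.0))
        if sd > 0:
            score += max((count - mean) / sd, 0.0)
    return score

# Five days of (hypothetical) historical activity for one user.
history = {
    "authentication": [20, 22, 19, 21, 20],
    "network_egress_mb": [100, 120, 110, 90, 105],
}
baseline = build_baseline(history)

normal_day = {"authentication": 21, "network_egress_mb": 110}
odd_day = {"authentication": 23, "network_egress_mb": 900}
print(deviation_score(baseline, normal_day) < deviation_score(baseline, odd_day))  # True
```

A real system would learn baselines continuously and combine many more signal categories, but the core intuition is the same: risk is measured relative to the user's own history, not a fixed rule.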

Peer group analytics offer yet another lens into a user’s activities to help identify when a user deviates from the typical behavior of employees with similar roles and responsibilities.
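Peer group analysis can be sketched the same way: instead of comparing a user to his or her own history, compare the user's activity to the distribution across colleagues with similar roles. The function and sample figures below are hypothetical illustrations of that comparison.

```python
import statistics

def peer_outlier(user_value, peer_values, z_threshold=3.0):
    """Flag a user whose activity sits far above the peer-group norm."""
    mean = statistics.mean(peer_values)
    sd = statistics.stdev(peer_values)
    return sd > 0 and (user_value - mean) / sd > z_threshold

# Daily records accessed by other marketing employees (hypothetical):
marketing_peers = [34, 40, 29, 37, 41, 33, 38, 36]

print(peer_outlier(35, marketing_peers))   # False: in line with peers
print(peer_outlier(400, marketing_peers))  # True: far above the group norm
```

This catches cases a personal baseline alone would miss — for instance, a user whose access has always been high for the role, or one who ramps up activity slowly enough to drag his or her own baseline along with it.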

Learn More

Learn more about QRadar User Behavior Analytics and try the free QRadar UBA app from the IBM Security App Exchange. You can also watch this video to learn how you can combine QRadar UBA and QRadar Advisor with Watson to investigate suspicious behavior.

If you are attending Think 2018 in Las Vegas, check out the Security and Resiliency Campus and the sessions on user behavior analytics.

