False positive alerts in your threat intel platform can leave your team scrambling. It’s like driving to the wrong address: you arrive somewhere, but you’ve wasted time you could have spent at your intended destination. For security teams, knowing how to screen for false positives saves time and makes the team more efficient at addressing real threats. Learn how to screen threats as part of an incident response plan in a modern security operations center (SOC), and what to do when you spot a mistake.
What Do False Positives Look Like?
False positives are a common issue in threat intelligence, security operations and incident response. Mislabeled indicators of compromise and false security alerts signal a problem where none exists. For analysts, they are a distraction from real problems and can waste time and resources.
False positives can steer your operation in the wrong direction, whether you use indicators for blocking, such as filtering out malicious activity, or for detection, including forensic investigations. If you use indicators for threat analysis, a threat intel report containing wrongly labeled indicators could set you on the wrong track when evaluating risks or threats.
In addition to false positives, your team may also encounter false negatives, which lead to overlooking cybersecurity incidents or threats. Both types of false information cause frustration and problems. Besides the obvious loss of resources and the financial cost, they can damage an organization’s reputation and make it harder to win future collaboration from operational teams.
For terminology’s sake, note that a team may also encounter true positives, meaning an event is correctly identified, and true negatives, where an event is correctly rejected.
Good Incident Response Plans Include Context
False positives are often a contextual problem and can differ for each organization or person. What one organization considers a true alert, another considers a false positive. Examples include the execution of TeamViewer or domain name system (DNS) requests to Google’s public resolver 8.8.8.8.
When examining a report, you can gain additional context in several different ways, including:
- More descriptive details of an indicator, such as specific options or arguments used to launch a process.
- How an indicator relates to other indicators, in the form of a chain of activities, dependent actions or a set of used tactics and techniques.
- Details on the conditions and environment where an indicator should be observed.
- The lifecycle of indicators. An indicator that was relevant and accurate five years ago might not mark malicious behavior today.
Without this essential context, you’re mostly stuck with threat data. This data can be useful, but it is prone to errors and misinterpretation and is less valuable for incident response than fully contextualized information.
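To make that distinction concrete, here is a minimal sketch of what a contextualized indicator record could look like. The field names are hypothetical, not tied to any particular threat intel platform, and simply mirror the context items listed above, including a lifecycle check for stale indicators.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative record for a contextualized indicator. All field names
# are hypothetical and map to the context items discussed above.
@dataclass
class Indicator:
    value: str                # the observable itself, e.g. "8.8.8.8"
    ioc_type: str             # e.g. "ip-dst", "domain", "process"
    launch_details: str = ""  # options/arguments used to launch a process
    related: list = field(default_factory=list)  # chained or dependent indicators
    environment: str = ""     # conditions where the indicator should be observed
    last_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_stale(ioc: Indicator, max_age_days: int = 365) -> bool:
    """An indicator unseen for a long time may no longer mark malicious behavior."""
    return datetime.now(timezone.utc) - ioc.last_seen > timedelta(days=max_age_days)
```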
Unfortunately, no single approach will remove all false positives, but there are steps you can take to limit their frequency and impact on your incident response plan.
1. Prevent False Positives From Being Added to the Threat Intel Report
First, prevent false positives from being added to your data. Internal or company assets are typically the type of indicators you do not want in your cyber threat intel. For example, the names of your organization’s proxy server or company SharePoint sites are not indicators you would use to trigger a security alert. If you share threat information with other organizations, including this data risks disclosing sensitive internal information. Those indicators can be useful for providing additional context, though.
Besides company information, indicators belonging to RFC 1918 CIDR blocks (the private IPv4 address ranges), RFC 3849 (the IPv6 documentation prefix) or RFC 6761 (the list of special-use domain names) should rarely be included.
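As a rough illustration, the following sketch uses Python’s standard ipaddress module to screen out such values before they enter your data. The special-use domain set is a partial, illustrative subset of RFC 6761, not a complete list.

```python
import ipaddress

# Partial RFC 6761 special-use names; extend to match your environment.
SPECIAL_USE_DOMAINS = {"localhost", "test", "invalid", "example",
                       "example.com", "example.net", "example.org"}

def should_exclude(indicator: str) -> bool:
    """True for values that should rarely end up in shared threat intel."""
    try:
        # .is_private covers the RFC 1918 IPv4 ranges and, for IPv6,
        # the RFC 3849 documentation prefix 2001:db8::/32, among others.
        return ipaddress.ip_address(indicator).is_private
    except ValueError:
        pass  # not an IP address; fall through to the domain check
    name = indicator.lower().rstrip(".")
    return name in SPECIAL_USE_DOMAINS or name.split(".")[-1] in SPECIAL_USE_DOMAINS
```

For example, should_exclude("10.1.2.3") and should_exclude("foo.test") return True, while a routable address such as 8.8.8.8 passes through for the whitelist handling described in the next step.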
2. Flag the Likelihood of False Positives in Threat Intel Reports
A second step is to notify analysts about the likelihood of false positives. This often concerns benign indicators, such as the Google public DNS resolver mentioned earlier. The notification can be based on whitelists of the network address space belonging to platform-as-a-service providers, known port scanners, popular domains, such as the Alexa Top 1M, root name servers, popular public resolvers and commonly used cloud services, such as Office 365.
Present this notification to your analysts with a visual indication when they process threat data. It can also be embedded in security alerts based on these indicators, for example as a confidence tag such as false-positive:risk="high".
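A minimal sketch of this annotation approach, using a hypothetical in-memory whitelist, might look like this:

```python
# Hypothetical whitelist that annotates indicators instead of dropping them.
BENIGN_INDICATORS = {
    "8.8.8.8": "Google public DNS resolver",
    "8.8.4.4": "Google public DNS resolver",
}

def annotate(indicator: str) -> dict:
    """Attach a visible false-positive risk tag for analysts and alerts."""
    record = {"value": indicator, "tags": []}
    note = BENIGN_INDICATORS.get(indicator)
    if note:
        record["tags"].append('false-positive:risk="high"')
        record["comment"] = note
    return record

print(annotate("8.8.8.8"))
# {'value': '8.8.8.8', 'tags': ['false-positive:risk="high"'],
#  'comment': 'Google public DNS resolver'}
```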
Once you are more confident in the quality of your whitelist, you can prevent these indicators from being included in your threat data at all. Be careful not to blindly enable or trust whitelists. Think about which aspects of the data are controlled by adversaries. You certainly do not want to enable a whitelist that contains the essential infrastructure of one of your most feared adversaries.
A good example of community-provided whitelists is the set of warning lists available for the Malware Information Sharing Platform (MISP).
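If you work with MISP data in Python, the pymispwarninglists package bundles these community lists. A short sketch, assuming the package’s search interface (check the documentation for your installed version):

```python
# Requires: pip install pymispwarninglists
from pymispwarninglists import WarningLists

# slow_search=True enables value lookups, including CIDR matching;
# this flag is an assumption based on the package docs.
warninglists = WarningLists(slow_search=True)

for hit in warninglists.search("8.8.8.8"):
    print(f"8.8.8.8 appears in warning list: {hit.name}")
```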
3. Report Sightings, Observables and False Positives
A third step is reporting sightings, whether as observations or as indications of a false positive. The concept of sightings allows you to give a score, or quality level, to indicators. It is a feedback loop from the security operations team and incident response staff to the threat intelligence team, giving them a way to acknowledge observed activity, either as genuinely malicious or as a false positive.
The key with sightings is to make reporting and usage as simple as possible. One very straightforward solution is SightingsDB, which can even be run from a Docker container.
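If MISP is your threat intel platform, a false-positive sighting can be reported through PyMISP. A sketch with placeholder server address and API key; MISP uses sighting type 1 to mark a false positive (type 0 is a true sighting):

```python
# Requires: pip install pymisp
from pymisp import PyMISP, MISPSighting

# Placeholders: point these at your own MISP instance and API key.
misp = PyMISP("https://misp.example.com", "YOUR_API_KEY", ssl=True)

sighting = MISPSighting()
sighting.from_dict(value="8.8.8.8", type="1")  # type 1 = false positive
misp.add_sighting(sighting)
```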
4. Inform Analysts About Sightings
The fourth step is informing analysts about these sightings. You can report individual sightings on a regular schedule, or only once they exceed a certain threshold. Possible ways to present the data include dashboards in your threat intel platform or email notifications.
The best way to handle this step is to assign a dedicated role in the cyber threat intelligence team. This person can go through the reported sightings on a regular basis and validate the reports.
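The threshold logic itself can be simple. Here is a hypothetical sketch that counts false-positive sightings per indicator and flags those due for review:

```python
from collections import Counter

# Hypothetical sightings log: (indicator, is_false_positive) tuples.
sightings = [
    ("8.8.8.8", True),
    ("8.8.8.8", True),
    ("evil.example", False),
    ("8.8.8.8", True),
]

THRESHOLD = 3  # notify once an indicator accumulates this many FP reports

fp_counts = Counter(value for value, is_fp in sightings if is_fp)
for indicator, count in fp_counts.items():
    if count >= THRESHOLD:
        # In practice: push to a dashboard widget or an email digest.
        print(f"Review needed: {indicator} reported as false positive {count} times")
```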
5. Disable the Indicator To Streamline Your Cyber Threat Intel
The fifth, and last, step is to disable the indicator so it is no longer actionable or included in your cyber threat intel. This prevents these indicators from being further distributed to your security controls or made available to the incident response team.
Depending on the size of the cyber threat intelligence team, you can have this done by the same role that reviewed the sightings, or you can implement a form of four-eyes review and assign the role to someone else. Ideally, this will lead to fewer false positives in the future and more time for your team to address true threats.
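In MISP, for instance, clearing an attribute’s to_ids flag stops it from being exported as actionable. A PyMISP sketch, again with placeholder connection details:

```python
from pymisp import PyMISP

misp = PyMISP("https://misp.example.com", "YOUR_API_KEY", ssl=True)

# Find the attribute(s) carrying the offending value, then clear the
# to_ids flag so the indicator is no longer exported as actionable.
results = misp.search(controller="attributes", value="8.8.8.8", pythonify=True)
for attribute in results:
    attribute.to_ids = False
    misp.update_attribute(attribute)
```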