Indicators of compromise (IoCs) are artifacts such as file hashes, domain names or IP addresses that indicate intrusion attempts or other malicious behavior. This threat information is typically broken down into two building blocks:

  • Observables — measurable events or stateful properties; and
  • Indicators — observables with context, such as time range.

IoCs are crucial for sharing threat information and can help organizations verify whether they are affected by a computer security incident. It is also a good idea to share IoCs with your peers if you have been the victim of a security breach.

Automatic and Manual Threat Sharing

Automated, frictionless sharing of threat intelligence is important to help security professionals respond to incidents or, ideally, detect attacks before they do damage. Automated sharing can take place via Trusted Automated Exchange of Intelligence Information (TAXII), a set of specifications for transferring cyberthreat information. The information itself can be represented in the Structured Threat Information Expression (STIX) language.
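
As a rough illustration of what automated consumption can look like, the sketch below polls a TAXII 2.0 collection with the open-source taxii2-client Python library. The server URL, collection ID and credentials are placeholders, not a real feed, and error handling is omitted.

    # Minimal sketch: polling a TAXII 2.0 collection with the taxii2-client
    # library. URL, collection ID and credentials below are placeholders.
    from taxii2client.v20 import Collection

    collection = Collection(
        "https://taxii.example.com/api1/collections/11111111-2222-3333-4444-555555555555/",
        user="analyst",
        password="secret",
    )

    # get_objects() returns a STIX bundle (a dict) containing the shared objects
    bundle = collection.get_objects()
    for obj in bundle.get("objects", []):
        print(obj["type"], obj.get("name", obj["id"]))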

In 2015, we discussed the advantages of using STIX, TAXII and Cyber Observable Expression (CybOX) to increase and facilitate information sharing with your peers. Since then, the STIX language has evolved from version 1.x to version 2. Some major differences are listed below, followed by a short sketch of what STIX 2 objects look like.

  • Version 2 replaces Extensible Markup Language (XML) with JavaScript Object Notation (JSON).
  • CybOX objects are now called STIX Cyber Observables, resulting in only one specification.
  • All the objects, called STIX Domain Objects (SDOs), sit at the top level. Relationships between these top-level objects are represented by a dedicated relationship object.
  • The generic tactics, techniques and procedures (TTPs) and exploit target types from STIX 1.x have been removed and replaced by separate top-level objects.
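
The snippet below is a trimmed illustration of the shape of STIX 2 JSON: two top-level objects and the relationship object that links them. The identifiers, names and pattern are made up, and several required properties (timestamps and spec-version details) are omitted for readability, so this is not a complete, validated bundle.

    # Trimmed illustration of STIX 2 JSON: two top-level objects and the
    # relationship object that links them. All values are placeholders and
    # some required properties are left out for brevity.
    import json

    indicator = {
        "type": "indicator",
        "id": "indicator--a1b2c3d4-0000-0000-0000-000000000001",
        "name": "Known C2 IP address",
        "pattern": "[ipv4-addr:value = '198.51.100.7']",
        "valid_from": "2018-01-01T00:00:00Z",
    }

    malware = {
        "type": "malware",
        "id": "malware--a1b2c3d4-0000-0000-0000-000000000002",
        "name": "ExampleRAT",
    }

    relationship = {
        "type": "relationship",
        "relationship_type": "indicates",
        "source_ref": indicator["id"],
        "target_ref": malware["id"],
    }

    print(json.dumps([indicator, malware, relationship], indent=2))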

Although STIX, TAXII and tooling such as malware information sharing platforms (MISPs) have been around for a while, some organizations are unable to take advantage of automatic threat sharing due to a shortage of available resources, lack of security maturity or insufficient internal processes or tools to consume threat information.

On some occasions it is the sender, rather than the receiver, that is unable to process threat information automatically, and the inability is not always caused by technical constraints. This does not mean, however, that these organizations should be completely deprived of threat intelligence.

Email can be a good fallback for manually exchanging threat information. On rare occasions, the exchange can be done in person. Email can also provide a way to receive threat information from automated systems because some systems can send notifications on new events. This isn’t as flexible or efficient as consuming the information automatically, but it is still better than not receiving the information at all.
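
If you do receive indicators by email, a little scripting can still take some of the manual work out of it. The sketch below pulls IP addresses and file hashes out of a plain-text message body, undoing common "defanging" first. The sample text, patterns and defanging conventions are assumptions and will vary per sender.

    # Rough sketch: extracting IP addresses and file hashes from the plain-text
    # body of a notification email. The sample body is made up; real messages
    # and defanging conventions vary, so treat the patterns as a starting point.
    import re

    body = """
    New sighting: 203.0.113[.]42 contacted by host FINANCE-07.
    Dropped file hash: 3f786850e387550fdab836ed7e6dc881de23001b
    """

    # Undo common defanging such as 203.0.113[.]42 and hxxp:// before matching
    refanged = body.replace("[.]", ".").replace("hxxp", "http")

    ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", refanged)
    hashes = re.findall(r"\b[0-9a-fA-F]{32,64}\b", refanged)
    print("IPs:", ips)
    print("Hashes:", hashes)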

Message and Content Verification

There are a number of verification steps to complete before you consume indicators received via manual exchange. These steps can also be applied to automatic processing.

  • Verify the source of the message. Emails containing IoCs can be used in phishing attacks. Be vigilant if the indicators are included in a document that could contain malicious code, and be sure to verify the sender information.
  • Contact the sender via out-of-band communication to confirm the receipt of the message and that the message indeed originated from the correct sender. Use public or known resources to contact the sender, not the telephone number included in the email signature.
  • Verify the integrity of the message to make sure the indicator information was not altered during transfer. You can do this by using a digital signature, a file hash or another out-of-band verification mechanism; a minimal hash comparison is sketched after this list.
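
The following is a minimal sketch of the out-of-band integrity check mentioned in the last bullet: hash the received indicator file and compare it with a digest the sender confirmed over another channel. The file name and expected digest are placeholders.

    # Minimal sketch of an out-of-band integrity check: hash the received
    # indicator file and compare it with a digest confirmed over another
    # channel (e.g., by phone). File name and digest are placeholders.
    import hashlib

    EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    with open("received_iocs.csv", "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()

    if digest != EXPECTED_SHA256:
        raise SystemExit("Digest mismatch: do not process these indicators")
    print("Integrity check passed")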

Indicators require context to be useful. If someone sends you a message with a bunch of IP addresses and file hashes but no information about which campaign they belong to or how old they are, it’s almost impossible to do something useful with them. How much historical information should you use to verify the IoCs? Should you immediately isolate a host and start an incident response plan when a match has been found? In addition to these key questions, security professionals should ask the following when receiving IoCs (one way to record the answers alongside each indicator is sketched after the list):

  • What is the campaign or threat actor?
  • Are specific sectors targeted? If so, is your organization part of these sectors?
  • What is the sophistication level of the attackers? Are they abusing system vulnerabilities or employing social engineering tactics? Are they script kiddies or state-sponsored actors?
  • Is the attackers’ goal to disrupt a service or to leak sensitive information?
  • Is there any relationship between the indicators? Should they be combined or used as single entities?
  • What is the applicable time frame? Should you use historical information?
  • Is there a specific applicable environment of the indicators? Should you check them on workstations, servers or specific devices or networks?
  • What is the expected reaction when a match is found? Should you block or isolate the source or leave everything untouched and monitor its behavior more closely? Blocking should never happen without logging the source of the request.
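
One practical way to make sure these questions are actually answered is to store the answers next to each indicator. The sketch below shows one possible structure; all field names and values are illustrative, not a standard format.

    # Illustrative way to keep the answers to the questions above attached to
    # each indicator; field names and values are made up.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ContextualIndicator:
        value: str                      # e.g., an IP address or file hash
        ioc_type: str                   # "ipv4", "sha256", "domain", ...
        campaign: str = "unknown"
        targeted_sectors: List[str] = field(default_factory=list)
        valid_from: str = ""            # applicable time frame
        valid_until: str = ""
        scope: str = "all"              # workstations, servers, network, ...
        action_on_match: str = "alert"  # alert, block, isolate, monitor

    ioc = ContextualIndicator(
        value="198.51.100.7",
        ioc_type="ipv4",
        campaign="example-campaign",
        targeted_sectors=["finance"],
        valid_from="2018-01-01",
        scope="network",
        action_on_match="monitor",
    )
    print(ioc)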

Sharing and Distribution of Indicators

It’s likely that you will not be the only person applying all the IoCs. This means that you will have to distribute them to operational teams. Before doing so, you should determine whether sharing is allowed.

Sharing indicators can be restricted on legal grounds. A practical way to restrict sharing is to use the Traffic Light Protocol (TLP), a set of designations designed to facilitate the exchange of sensitive information.

TLP has been widely adopted in the computer security incident response team (CSIRT) and security communities. The originator of the information labels the information with one of four colors. These colors indicate what further dissemination, if any, is allowed after the information is transmitted to the original receiver. Note that the colors only mark the level of dissemination, not the sensitivity level, although they often align.

Information shared under TLP:RED — meaning that it is for your eyes only — is difficult to process since you, the receiver, are the only one who can use it. You are not even allowed to share it with people within your organization. TLP:RED can be used in person to inform on new threat actors or campaigns, for example, but it’s less practical for sharing IP addresses and file hashes.

Similarly, information marked as TLP:AMBER should be handled with caution. The old definition of TLP limited TLP:AMBER to your organization only. The new definition allows for sharing, when necessary, with clients and customers. If you’re not certain, check with the sender and request a constituent restriction to be added.
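
TLP labels can also be enforced mechanically before anything is redistributed. The snippet below is a simplified illustration of checking labels before forwarding indicators to an operational team; the sharing policy and the indicator values are placeholders and should reflect whatever your organization has actually agreed on with the sender.

    # Simplified illustration of honoring TLP labels before forwarding
    # indicators outside your own team. The policy below is a placeholder;
    # align it with the TLP definitions your organization follows.
    SHAREABLE_WITH_OPERATIONS = {"TLP:WHITE", "TLP:GREEN", "TLP:AMBER"}

    indicators = [
        {"value": "198.51.100.7", "tlp": "TLP:AMBER"},
        {"value": "evil.example.com", "tlp": "TLP:RED"},
    ]

    for ioc in indicators:
        if ioc["tlp"] in SHAREABLE_WITH_OPERATIONS:
            print("forward to operations:", ioc["value"])
        else:
            print("do not redistribute:", ioc["value"])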

If you have confirmed that you are allowed to share the indicators, create distribution packages for the operational teams. The best way to deal with this is to agree on a process beforehand in which you know what to expect from the operational teams and the teams understand the motivation and urgency of your request. The process should cover the urgency of implementation, the scope of the indicators, the expected action on a hit and the frequency and urgency of reporting. Make sure your process also includes an acknowledgment of receipt and implementation by the operational team.

Applying IoCs

Applying IoCs without having a logging system is difficult. Ideally, you have central logging that collects network, server, workstation and application information.

Network Information

Start with network IoCs that can be deployed on central infrastructure where much of the network traffic passes through. Firewalls, switches and NetFlow sources can be used to track network flows and detect access to malicious IP addresses. Proxy servers log the internet traffic of your users and can be used to follow up on requests to malicious websites. Security teams can use domain name system (DNS) server logs to detect the resolution of malicious domain names and access to malicious IP addresses. Finally, mail server logs can help analysts verify whether users received specific malicious email messages.
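
As a rough sketch of this kind of matching, the script below searches a DNS query log for known-bad domains. The log format (one query per line, queried domain in the third column) is an assumption; adapt the parsing to whatever your resolver or proxy actually writes.

    # Rough sketch: searching a DNS query log for known-bad domains. The log
    # format is an assumption (domain in the third whitespace-separated field).
    MALICIOUS_DOMAINS = {"evil.example.com", "bad.example.net"}

    with open("dns_queries.log") as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 3 and fields[2].rstrip(".").lower() in MALICIOUS_DOMAINS:
                print("possible hit:", line.strip())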

Server, User and Workstation Information

IoCs that handle file hashes, registry keys, mutual exclusion objects or installed services require a per-host investigation. Obviously, this doesn’t mean that you have to manually check every asset in your environment. Most security suites include features to query this information, but you can also use custom scripting to achieve the same results.
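
In the spirit of the custom scripting mentioned above, here is a minimal sketch that hashes files under a directory and flags matches against known-bad SHA-256 digests. The directory and digests are placeholders; a real sweep would need a sensible per-host scope, performance tuning and reporting.

    # Minimal custom-scripting sketch: hash files under a directory and flag
    # matches against known-bad SHA-256 digests. Path and digests are placeholders.
    import hashlib
    from pathlib import Path

    KNOWN_BAD = {
        "00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff",
    }

    for path in Path("C:/Users/Public").rglob("*"):
        if path.is_file():
            try:
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
            except OSError:
                continue  # unreadable file, skip
            if digest in KNOWN_BAD:
                print("IoC hit:", path)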

If your security suite does not offer a good query feature, you can use the free LOKI scanner. LOKI comes with a default set of rules that you can extend with your own IoCs. LOKI can be deployed as a software package in your infrastructure, but be aware that the set of IoCs is not encrypted, meaning it could potentially be altered by malware. Make sure the rules are stored on a read-only share.

LOKI supports YARA rules. YARA is a pattern-matching Swiss army knife that helps analysts identify and classify malware samples. YARA rules allow security teams to define a set of conditions that all have to be met before flagging a hit. Combining conditions in this way can greatly reduce the risk of false positives compared to relying on single indicators.
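
The example below uses the yara-python bindings to compile and run a rule that only fires when several conditions hold at once, which is how combined conditions help keep false positives down. The rule contents and the scanned file are made up for illustration.

    # Small example using the yara-python bindings. The rule only matches when
    # the MZ header, a size limit and both strings are present; rule contents
    # and target file are illustrative placeholders.
    import yara

    RULE = r"""
    rule example_dropper
    {
        strings:
            $s1 = "cmd.exe /c" ascii
            $s2 = { 6A 40 68 00 30 00 00 }
        condition:
            uint16(0) == 0x5A4D and filesize < 2MB and all of ($s*)
    }
    """

    rules = yara.compile(source=RULE)
    matches = rules.match("C:/samples/suspect.bin")
    for match in matches:
        print("rule matched:", match.rule)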

IoC Hits and False Positives

When an IoC generates a hit, it is important to gather as much information as possible on the source that caused the match. For example, determine which user account an installed service runs under, or collect metadata, time stamps and user information for file matches. Depending on the IoC and the match, you can then start a forensic investigation process to evaluate whether a host is indeed compromised and whether you are the victim of an attack.
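
As a small, hedged example of collecting such context, the snippet below gathers basic metadata for a file that matched an IoC. The owner lookup via the pwd module only works on Unix-like systems, and the path is a placeholder; on Windows you would obtain owner information differently.

    # Sketch of collecting basic metadata for a file that matched an IoC so the
    # follow-up investigation has a starting point. Unix-only owner lookup;
    # the path is a placeholder.
    import os
    import pwd
    from datetime import datetime, timezone

    path = "/tmp/suspect.bin"
    st = os.stat(path)

    print("size (bytes):", st.st_size)
    print("modified:", datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat())
    print("owner:", pwd.getpwuid(st.st_uid).pw_name)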

Inform the operational teams on how to react when an IoC generates a hit. Not every hit immediately means that malicious activity took place. Some IoCs are prone to false positives, and defining what indicators are more likely to cause false positives is a matter of experience and varies by case. Most importantly, it depends on the context of the IoC.

A file hash of a known malicious artifact or a connection to an IP address used exclusively by attackers is almost certainly a sign of malicious activity. Connections to an IP address belonging to shared hosting infrastructure are more likely to produce false positives. You can increase your options for excluding false positives by enriching your logging information with different sources.
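
One simple enrichment step is to mark hits on IP addresses that fall inside ranges you know belong to shared hosting, so analysts treat them with a higher suspicion of false positives. The sketch below uses Python's ipaddress module; the example ranges are documentation prefixes, not real hosting providers.

    # Small enrichment sketch: flag hits on IP addresses inside ranges known
    # (to you) to be shared hosting, which are more likely false positives.
    # The ranges below are documentation prefixes used as placeholders.
    import ipaddress

    SHARED_HOSTING_RANGES = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def false_positive_risk(ip_string: str) -> str:
        ip = ipaddress.ip_address(ip_string)
        if any(ip in net for net in SHARED_HOSTING_RANGES):
            return "elevated (shared hosting range)"
        return "normal"

    print(false_positive_risk("203.0.113.42"))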

Create Your Own IoCs

If you have been the victim of a computer security incident and you’ve done the proper analysis, it might be useful to share your findings with your peers. Many organizations are reluctant to share information, but sharing threat data allows your peers to update their protection mechanisms.

You can share the results of a malware analysis or other characteristics of the intrusion using public malware sandboxes. Most importantly, you should gather information relevant to the intrusion and include the observed objectives of the attacker.

Consuming IoCs means more than merely searching your logs for certain single indicators. It requires a well-defined process and a dedication of resources. These resources are necessary to apply the IoCs and understand the threats and attack vectors described in the IoC document. As with most things, this requires training. For this reason, security professionals should frequently test and update their process of consuming and applying indicators.
