Your security incident detection capabilities are at the heart of your organization’s incident response plan. After all, if you cannot recognize incidents, you cannot activate an incident response plan. Although incidents can occur in countless ways — and it’s impossible to detect every possible attack scenario — you need to develop a strategy for testing and improving your existing detection capabilities, one that incorporates methods ranging from testing on paper to running a full-blown cyber simulation.
Your testing strategy should go hand-in-hand with general efforts to improve your team, and it should also mandate sufficient quality checks. Just as you wouldn’t want your antivirus to trigger on benign files, you wouldn’t want to start an incident response plan based on insufficient or faulty detection capabilities.
Traditional Testing Strategies
A paper test is what it sounds like: a test on paper. It’s often the first step in your strategy, and its output is used to develop incident response plans. You can start by identifying your key business assets and data flows and documenting your existing detection capabilities. Some key points to consider include:
- How easy would it be for malicious actors to bypass your detection capabilities?
- How reliable, complete and accurate is your information?
- Do you have direct access to the detection information, or is the information curated first? Assessing any assets that are not under your control can be a challenge. There’s a big difference between having direct access to log events and having to rely on a weekly report from a service provider.
For a paper test, you can start by simulating data flows and describing where and how you intend to detect malicious activity. There are limits to this method, though, as this kind of test leaves considerable room for error and typically should only be used in the initial overview of your capabilities.
A more structured approach uses a tabletop exercise, in which your team walks through a cyber simulation on paper and then reviews and practices its response to the situation at hand. While a paper test is often focused on one asset, a tabletop exercise takes a more holistic approach that can show you the areas of business (or logic) where your detection coverage could be improved.
A tabletop exercise can be conducted with just technical personnel, but the simulation tends to be most effective if you have stakeholders from different departments participating.
Cyber Simulation Datasets
Different Types of Datasets
The next step is to use datasets for cyber simulation. A common approach is to feed the contents of these datasets into your security solutions and check whether they trigger the expected alerts. For example:
- Use datasets with network information to test the detection rules of your intrusion detection system (IDS):
- One of the largest collections is at Malware-Traffic-Analysis.net, which hosts captures of malspam, malware and ransomware infections. Other sources include the Shadow Brokers PCAPs with traces of EternalRomance and DoublePulsar, and a collection of malware traffic by Contagio.
- PCAP files can be replayed with tcpreplay or ingested directly by your IDS.
- Scan host-based artifacts to test endpoint protection:
- Start by downloading the EICAR or WICAR test samples, then move on to repositories such as MalShare, VirusShare or theZoo.
- You do not have to execute the samples — the presence of the file should be enough to trigger detection. Take care though, and do this only in a controlled environment.
- Load system and application logs to tune a security information and event management (SIEM) solution:
- The network datasets from Los Alamos National Laboratory can help you verify that your SIEM distinguishes between malicious and nonmalicious traffic. The set of Windows event log files associated with specific attack and post-exploitation techniques can help you verify that your SIEM is able to spot malicious behavior.
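As a rough sketch of the first two dataset types above — assuming `tcpreplay` and ClamAV as example tools, with placeholder file and interface names — the workflow might look like this:

```shell
#!/bin/sh
# Minimal sketch, not a production script. "sample.pcap" and "eth0"
# are placeholders; tcpreplay and clamscan are example tools only.

# Generate the standard 68-byte EICAR test file; any mainstream
# antivirus engine should flag it on access or on scan.
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.txt
wc -c < eicar.txt   # should report 68 bytes

# Replay a network capture toward the interface your IDS monitors
# (requires tcpreplay and a capture file from one of the sources above):
#   tcpreplay --intf1=eth0 sample.pcap

# Scan the test file with your endpoint agent; it should report a detection:
#   clamscan eicar.txt
```

Note that simply writing the EICAR file to disk may already be enough to trigger an on-access scanner, which is itself a useful data point about your endpoint coverage.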
Harness Your Datasets
There are a number of key points to consider as you use these simulation datasets. First, verify that your security device actually alerts on elements of the datasets. Take into account any whitelists that are applied and the inherent limitations of your detection methods. If a detection is based on elements in a log file, verify that those elements (or events) are at least logged. A good example here is Sigma rules, which rely heavily on logged Windows events.
Next, check that the alert is complete and provides sufficient information for further investigation. At a minimum, the alert should contain a timestamp, the reason for alerting and the affected asset. Then verify that the detection is accurate: it should trigger only on malicious activity and shouldn’t cause too many false positives. To tune your rule set, you will need a corpus of “goodware.” For YARA, for example, you could use VirusTotal’s retrohunt feature against goodware.
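A minimal local version of this false-positive check can also be run with the `yara` command-line scanner. The rule and the one-file “goodware” corpus below are toy placeholders to illustrate the workflow; in practice you would scan a large set of known-clean files:

```shell
#!/bin/sh
# Sketch of false-positive testing: scan a known-clean corpus with your
# YARA rules. Every hit is, by definition, a false positive to fix.

# Stand-in goodware corpus (placeholder; use real clean files in practice).
mkdir -p goodware
printf 'hello world\n' > goodware/clean.txt

# A deliberately over-broad demo rule that will fire on benign content.
cat > detections.yar <<'EOF'
rule overly_broad_demo {
    strings:
        $s = "hello"
    condition:
        $s
}
EOF

# Run the scan if the yara CLI is available; each output line names a
# rule that matched a clean file and therefore needs tightening.
if command -v yara >/dev/null 2>&1; then
    yara -r detections.yar goodware/ | tee fp_hits.txt
else
    echo "yara not installed; skipping scan" > fp_hits.txt
fi
```

An empty `fp_hits.txt` after a run over a realistic corpus is a good sign; a long one tells you exactly which rules to revisit before they flood your analysts with noise.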
Finally, make sure that the alert and the associated artifacts can be processed by your forensic investigation tools so you can follow up with your incident response. One way of preparing these tools is to test them against reference datasets — like the Computer Forensic Reference Data Sets by NIST or the disk images by Digital Corpora — before the fact so you understand their limitations and strengths.
Explore Advanced Simulation Datasets
If you have explored these datasets, you can move forward and use more advanced datasets, such as the set from Splunk, Boss of the SOC (BOTS), which combines Windows events, IDS events and application data. Other datasets include repositories of public exploits from malSploitBase or exploit-db (which can help you test both network and endpoint detection capabilities) and the Atomic Red Team, which can help test endpoints.
One data source worth mentioning is the traffic captures made available by malware analysis sandboxes, either internal or public. For example, the malware sandbox Hybrid Analysis provides not only PCAP files but also samples of other related malware files (you do need to sign up first). The detection methods described in the MITRE ATT&CK techniques can be a source of inspiration as well.
Additionally, you can always create your own datasets by using tests included in Metasploit or by crafting your own network traffic via pCraft. But crafting your own network traffic is only the beginning — the next step is emulating adversary behavior. You can do this via blue team/purple team exercises, but this is only really effective if you have well-established incident response processes already. Another potential approach is to develop your own scenarios. As it happens, Caldera by MITRE builds on the ATT&CK framework and is fit for this job.
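Before reaching for a full emulation framework, you can hand-roll a very simple emulation run: execute benign discovery commands that map to MITRE ATT&CK techniques, record when you ran them, and then check whether each one shows up in your SIEM. This is a minimal sketch under that assumption; the log file name and command selection are illustrative only:

```shell
#!/bin/sh
# Hand-rolled adversary emulation sketch: run benign discovery commands
# mapped to ATT&CK technique IDs and keep a timestamped record, so you
# can later correlate each step against your SIEM's collected host logs.

run_step() {
    technique="$1"; shift
    # Record what we ran and when (UTC), for later correlation.
    echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $technique: $*" >> emulation_log.txt
    "$@" >/dev/null 2>&1 || true
}

run_step "T1033 System Owner/User Discovery" whoami
run_step "T1057 Process Discovery" ps aux
run_step "T1016 System Network Configuration Discovery" ip addr

# Each line below should have a matching event (ideally an alert) in your SIEM.
cat emulation_log.txt
```

Any step that produced no corresponding event in your SIEM points to a logging or collection gap, which is exactly the kind of finding these exercises are meant to surface.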
Automate Your Detection Capabilities
The holy grail of verifying your detection capabilities is automated adversary emulation. The Caldera framework built by MITRE is a good fit here, though it does require involvement from your team. Its biggest advantage is that it is built on the MITRE ATT&CK framework, so you can reuse the same technique mappings to demonstrate your detection coverage later on.
Improving your detection capabilities can start with a simple paper list that documents strengths and weaknesses. Once you have produced this, you can challenge your readiness via tabletop exercises and then validate your protections in practice with cyber simulation datasets.
Koen Van Impe is a security analyst who worked at the Belgian national CSIRT and is now an independent security researcher. He has a twitter feed (@cudes...