April 2, 2018 By Koen Van Impe 5 min read

If you do incident response work, you know it doesn’t matter whether you work for a large corporation or a small organization — an incident can strike at any given time.

Unfortunately, long gaps often separate the moment an incident occurs, the moment it is detected and the moment the security team can address it. That’s why, as the threat landscape evolves and expands, it’s increasingly critical to adopt automated incident response processes.

Collecting Event Information

To get started, you need at least two types of data:

  • Continuous event collection and monitoring of your assets; and
  • Threat intelligence.

All of your assets — or at least the most important ones — should generate events to be monitored for anomalies. Ideally, these events are logged in a central location via a log collector with a relatively long retention time.

Setting up a good logging and monitoring framework can be a daunting task. The key is to start small and fully understand the content of the events that your assets generate. Be aware of what you’re missing, what’s not included in the event data and what cannot be logged by your assets. Then, slowly expand and start including other event sources and assets not already in scope. The National Institute of Standards and Technology (NIST) provides useful guidance on continuous monitoring in NIST Special Publication 800-137.
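As a starting point, even a small script over centrally collected events can surface anomalies. The sketch below, with hypothetical syslog-style sample data, counts failed logins per source address and flags sources that cross a threshold — a simple brute-force heuristic, not a full monitoring framework:

```python
import re
from collections import Counter

# Hypothetical sample of centrally collected authentication events.
EVENTS = [
    "Apr  2 10:01:02 web01 sshd[811]: Failed password for root from 203.0.113.5",
    "Apr  2 10:01:04 web01 sshd[811]: Failed password for root from 203.0.113.5",
    "Apr  2 10:01:06 web01 sshd[811]: Failed password for admin from 203.0.113.5",
    "Apr  2 10:03:15 db01 sshd[214]: Accepted password for backup from 192.0.2.10",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def failed_logins_by_source(events):
    """Count failed-login events per source IP."""
    counts = Counter()
    for line in events:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

def anomalies(events, threshold=3):
    """Flag sources whose failed-login count meets the threshold."""
    return [ip for ip, n in failed_logins_by_source(events).items() if n >= threshold]

print(anomalies(EVENTS))  # → ['203.0.113.5']
```

The same pattern — parse, aggregate, compare against a baseline — generalizes to whatever event sources you bring into scope.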

Sharing Threat Intelligence Data

You can receive and share threat data by deploying your own threat intelligence platform and connecting it to community-driven sources. Having early access to this type of information enables security teams to detect malicious actions as early as possible in the attack phase and adapt their defenses accordingly.

Although there’s more to threat intelligence than just indicators of compromise (IoCs), these often represent the first level of data that analysts use to detect an incident. For this reason, it’s important to develop and exercise a process for consuming and verifying IoCs.
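At its simplest, consuming IoCs means checking the observables in your events against the indicators in your feed. The sketch below uses made-up feed values and event fields to illustrate that matching step; a production pipeline would also track feed provenance and indicator age:

```python
# Hypothetical IoC feed, keyed by indicator type (values are examples).
IOC_FEED = {
    "ip": {"203.0.113.5", "198.51.100.23"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb924"
               "27ae41e4649b934ca495991b7852b855"},
}

def match_iocs(event, feed):
    """Return (ioc_type, value) pairs from the event found in the feed."""
    hits = []
    for ioc_type, values in feed.items():
        value = event.get(ioc_type)
        if value in values:
            hits.append((ioc_type, value))
    return hits

event = {"host": "web01", "ip": "203.0.113.5", "sha256": None}
print(match_iocs(event, IOC_FEED))  # → [('ip', '203.0.113.5')]
```

Verifying IoCs before acting on them — checking age, source reputation and false-positive history — is the part of the process that most benefits from a documented, exercised workflow.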

Even with event data from critical assets and threat intelligence applied to it, breaches often go undetected for weeks, months or even years. The time from compromise to detection — also known as dwell time — is a major issue for many organizations, but there have been signs of improvement. The global median time from compromise to discovery was 99 days in 2016, but half of the respondents to the “2017 SANS Incident Response Survey” reported a dwell time of fewer than 24 hours.

Moving Toward Automated Incident Response

Fast detection is important, but so is prompt containment and incident response. According to the SANS survey, 53 percent of respondents reported a detection-to-containment time of less than 24 hours in 2017. This fast response time is good, but still leaves attackers with plenty of time to cause havoc or steal information. How can we improve this process and move further toward automated incident response?

There are several different approaches to orchestration and automation. One strategy is to collect all forensic data related to an alert and then present it in a summarized view to the analyst. This approach requires the analyst to decide what steps to take next. Another approach is to define follow-up actions after the data collection that can be deployed either automatically or with an analyst’s approval.
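The second approach — follow-up actions that run automatically or after analyst sign-off — can be modeled as a simple approval gate. This hypothetical sketch separates actions that are safe to automate from those that need a human decision:

```python
from dataclasses import dataclass

@dataclass
class ResponseAction:
    name: str
    auto: bool  # safe to run without analyst approval?

def execute_playbook(actions, analyst_approves):
    """Run automatic actions immediately; queue the rest for approval."""
    executed, pending = [], []
    for action in actions:
        if action.auto or analyst_approves(action):
            executed.append(action.name)
        else:
            pending.append(action.name)
    return executed, pending

playbook = [
    ResponseAction("collect-forensic-data", auto=True),
    ResponseAction("isolate-host", auto=False),
]
# In this run, the analyst declines the disruptive action.
print(execute_playbook(playbook, analyst_approves=lambda a: False))
```

Which actions belong on which side of that gate is a policy decision: data collection is usually safe to automate, while containment steps such as host isolation often warrant a human in the loop.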

Automated Digital Forensic Acquisition

When an alert is triggered, it’s natural to want to collect additional information and event data from the host or environment that triggered these alerts. It’s often impossible to gain physical access to the system, especially if you are working in an environment with a lot of remote workers. The following tools can help you collect the necessary data:

  • Mozilla InvestiGator (MIG) — MIG is a cross-platform, agent-based solution that facilitates real-time collection of data and information from endpoints, including file and memory inspection. It is designed to be fast and asynchronous with a focus on privacy and security. Agents never send raw data back to the platform — they only reply to questions. All actions are signed by GNU Privacy Guard (GPG) keys that are not stored in the platform.
  • GRR Rapid Response — GRR is a cross-platform incident response framework focused on remote live forensics. Both the agent and the server are written in Python. This tool enables fast and simple collection of hundreds of digital forensic artifacts.
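Both tools follow the same asynchronous model: fan a request out to many endpoints, let agents respond independently and aggregate the results centrally. The sketch below illustrates that pattern with a stand-in collector function — it does not use the real GRR or MIG APIs, which you would call in place of `collect_artifacts`:

```python
from concurrent.futures import ThreadPoolExecutor

def collect_artifacts(host, artifacts):
    """Hypothetical per-host collector; a real deployment would query
    the GRR or MIG agent on the endpoint here."""
    return {artifact: f"{artifact}@{host}" for artifact in artifacts}

def fan_out(hosts, artifacts):
    """Request the same artifact set from many endpoints in parallel,
    mirroring the asynchronous model of agent-based tools."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {h: pool.submit(collect_artifacts, h, artifacts) for h in hosts}
        return {h: f.result() for h, f in futures.items()}

results = fan_out(["web01", "db01"], ["process-list", "netstat"])
```

Because collection is asynchronous, offline laptops or remote workers’ machines can simply answer the request the next time their agent checks in.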

Incident Response Orchestration

In addition to forensic investigation, you can also automate post-incident response processes. In most cases, these activities are conducted not in response to a single incident but to a combination of events, ideally ones that are validated by different sources.

Open source tools such as TheHive can help analysts implement this approach. TheHive is an incident response platform that enables security teams to collaborate to improve the quality of their investigations. Analysts can work on different tasks and consolidate the results into a single case. The information in the tasks can be automatically enriched with data and context coming from external sources, such as VirusTotal and PhishTank.

TheHive seamlessly integrates with the Malware Information Sharing Platform (MISP) and analysts can use its Python API client to collect data from a security information and event management (SIEM) solution or phishing mailbox. If you like scripting, you can achieve quick wins with TheHive by adding your own analyzers to the underlying enrichment engine.
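The case-and-task model described above can be sketched in a few lines. This is a simplified illustration, not the real thehive4py client, which exposes similar concepts (cases, tasks, observables) over TheHive’s REST API:

```python
class Case:
    """Minimal stand-in for an incident response case with tasks
    that accumulate enrichment from external analyzers."""

    def __init__(self, title):
        self.title = title
        self.tasks = {}

    def add_task(self, name):
        self.tasks[name] = {"status": "open", "enrichment": {}}

    def enrich(self, task, source, data):
        """Attach context from an external source (e.g. VirusTotal)."""
        self.tasks[task]["enrichment"][source] = data

case = Case("Suspicious login burst on web01")
case.add_task("triage")
# Hypothetical analyzer result attached to the triage task.
case.enrich("triage", "virustotal", {"ip": "203.0.113.5", "detections": 7})
```

Custom analyzers slot into the same shape: each one takes an observable from a task and writes its verdict back as enrichment, which is where the “quick wins” from scripting come from.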

LogicHub also provides native deep analysis and correlation. It integrates with other security solutions to enrich data and add contextual information to each investigation. One of those integrations is VMRay, a malware detection and analysis solution.

The usefulness of such integrations is best demonstrated in the context of phishing alert triage. Let’s say that a suspected phishing message is sent for analysis to a dedicated mailbox. A manual evaluation of a phishing email would require an analyst to look at the originator data, the maliciousness of the URLs therein and the nature of the attachments. These steps take time and can become tedious, putting your analysts at risk of alert fatigue.

With an automated solution, LogicHub can pull the email from the mailbox and immediately investigate the email originator, including Sender Policy Framework (SPF) and Domain-based Message Authentication, Reporting and Conformance (DMARC) checks. It can then extract the URLs found in the email body and query them against VirusTotal, OpenPhish and MISP. All of the attached files from the email are submitted to VMRay, which analyzes the files and returns the results to LogicHub. The results from VMRay are correlated with the other sources of data enrichment previously collected, and LogicHub calculates a combined score.
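The core of that pipeline — parse the message, extract URLs, query reputation sources and combine the verdicts into a score — can be sketched with the standard library. The reputation lookup below is a made-up stand-in for the VirusTotal, OpenPhish, MISP and VMRay queries:

```python
import email
import re

RAW = (
    "From: attacker@example.net\n"
    "Subject: Invoice overdue\n\n"
    "Pay now at http://malicious.example.com/invoice\n"
)

def extract_urls(body):
    return re.findall(r"https?://\S+", body)

def reputation_score(url):
    """Hypothetical reputation lookup standing in for external sources."""
    return 0.9 if "malicious" in url else 0.1

def triage(raw_message):
    """Return a combined phishing score in [0, 1] for a raw email."""
    msg = email.message_from_string(raw_message)
    urls = extract_urls(msg.get_payload())
    url_score = max(map(reputation_score, urls), default=0.0)
    # A real pipeline would also fold in SPF/DMARC checks and
    # sandbox (attachment) verdicts before combining.
    return round(url_score, 2)

print(triage(RAW))  # → 0.9
```

Automating this front end is what frees analysts from the tedious per-email checks described above, leaving them to review the combined score rather than each signal individually.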

This whole process happens automatically and is presented to the analyst to review. Any changes made by the analyst are recorded and applied to future investigations. If the investigation turns up a real threat, LogicHub can kick off automated response and remediation actions with the analyst’s approval. In the case of phishing, the platform can automatically remove the malicious email from the mailboxes of all the impacted users.

Start Small to Reap Big Benefits

By collecting events from different data sources, enriching them automatically with context and information from threat intelligence feeds, conducting automatic attachment and URL analyses, and proceeding with automated response activities, security teams can vastly improve the quality and speed of incident investigations.

However, this type of integration cannot happen overnight. It’s important to start small, assess what works within your environment and then extend accordingly with additional sources.
