One of the terms most current in the industry is security intelligence. Many pseudo-definitions are communicated to clients, but the true meaning of the term often remains vague.

As anyone who has worked with me professionally knows, I am a sucker for a good analogy, so here goes: Cybersecurity is like being a chef. In both fields, little is black and white: People judge us on ratings, we follow trends and everyone takes advantage of new techniques. But why is being a chef anything like securing huge enterprises?

Cooking Up Proper Security

When a chef first starts developing a new menu, it’s a heavily iterative process beginning with visualizing and dreaming up something that works conceptually. Once this is complete, the chef then starts to create the dishes for the first time, sourcing top ingredients, perfecting the seasoning and determining the final presentation. It takes a lot of effort, with the chef pouring time, energy and money into the process.

For me, this is like the formative and early parts of rolling out a security operations center (SOC) or enterprise security program. We define the outcomes we’d like and then start working out how to achieve them. Sometimes we overwhelm ourselves with noise and data; sometimes we start too small and leave ourselves with little to respond to.

The chef then goes about industrializing the process, dividing the tasks, labor and workload across the kitchen. Allowing the new menu to be handled by the whole kitchen is a classic Henry Ford division of labor. This is like the scaling of security. Security professionals weave the monitoring, processes and controls throughout the organization, ensuring it becomes an integral part of the business rather than an afterthought.

Finally, the localized expert chefs start to make their part of the process more efficient. Maybe they use common base sauces for multiple dishes or work out the temperatures for the ovens in which multiple dishes will cook. This is an iterative process: The more they cook the dishes in question, the more familiar they become with the process, and the easier it is to improve it.

How Cheffing Relates to Security Intelligence

The same can be said for security intelligence. Detection leads to triage and incident response, which allows you to learn lessons and improve detection in the future. Then the cycle starts anew.

What we learn from incidents should be fed back into our detection systems so that, next time, no triage is required and vulnerabilities can be neutralized before they become full-blown threats. If we can’t neutralize them at the source, then we at least automate the response or codify it in a well-defined process to improve efficiency.
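The feedback loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any real SOC platform: the incident records, the "signature" field and the triage placeholder are all hypothetical names invented for the example.

```python
# Sketch of the detect -> triage -> respond -> learn cycle.
# All names here are illustrative assumptions, not a real SOC API.

def triage(incident):
    """Placeholder for human analysis that produces a reusable response."""
    def respond(inc):
        return f"responded to {inc['signature']}"
    return respond

def make_soc():
    # Lessons learned: maps known incident signatures to automated responses.
    known_responses = {}

    def handle(incident):
        sig = incident["signature"]
        if sig in known_responses:
            # Detection goes straight to automated response: no triage needed.
            return ("automated", known_responses[sig](incident))
        # Unknown incident: full triage and manual response...
        action = triage(incident)
        # ...then feed the lesson back so next time is automatic.
        known_responses[sig] = action
        return ("triaged", action(incident))

    return handle

soc = make_soc()
first = soc({"signature": "phishing-linkclick"})   # takes the manual path
second = soc({"signature": "phishing-linkclick"})  # same threat, now automated
```

The point of the sketch is the last two lines: the first occurrence of a threat pays the full cost of triage, while every repeat is handled automatically from the lessons learned.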

So why is it important to do the above? The key is to increase efficiency, so every time a known threat is detected, we aren’t required to go through the full process of triage and response. We then save time that could be better spent responding to serious incidents or taking our security maturity to the next level. This creates a staircase of advancement and security intelligence.

The theoretical end goal is that we never have to fully respond to a similar incident more than once. Instead, we will have learned our lessons and enhanced our automated responses. The extra brain cycles can then be devoted to looking ahead rather than backwards and staving off emerging threats.

What Can Professionals Do?

So as a security executive, how can these things be implemented in a scalable, iterative and ongoing way? They must be included as a key part of the SOC metrics. Most people focus on SOC metrics such as the number of incidents responded to and remediated, but developing automated processes or custom responses can be time-consuming. Taking a longer view of time to value shows that if we invest effort early on to work out these responses (sometimes called playbooks), the SOC metrics will naturally improve as incidents are remediated in short time frames with little human intervention.

The other key part is to dedicate resources to increasing maturity. This creates the staircase model: If every week we playbook or automate five incident types, the next week we have another five to automate, and within a month there are 20 incident types either closed at the source or playbooked for a quick, predictable response. Because it is driven by the incidents we actually see, this gives us a model that constantly evolves to battle advanced threats, and it is a much better approach than trying to build 50 responses before we even start.

Playbooks should be integrated with the existing business processes that other areas of the enterprise own, such as simple malware cleanups on workstations (usually owned by IT or desktop teams). With a preset playbook agreed in advance, there are no obstacles with the owners involved, and incidents can be closed without ad hoc communication each time. As maturity increases, staff can spend more time on the really serious incidents that could have a critical effect on the organization, along with proactive work to look for emerging threats.
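To make the idea of a preset playbook with an agreed owner concrete, here is a rough Python sketch. The registry, owner names and steps are illustrative assumptions only; real SOAR tooling would look quite different.

```python
# Sketch of a playbook registry with pre-agreed business owners.
# Playbook names, owners and steps are hypothetical examples.

PLAYBOOKS = {
    "workstation-malware": {
        "owner": "desktop-team",  # ownership agreed up front, no ad hoc chat
        "steps": ["isolate host", "run malware cleanup", "verify clean", "close ticket"],
    },
}

def run_playbook(incident_type):
    playbook = PLAYBOOKS.get(incident_type)
    if playbook is None:
        # No playbook yet: escalate to an analyst and flag the incident
        # type as a candidate for next week's automation work.
        return {"status": "escalated", "candidate_for_playbook": incident_type}
    # Preset ownership lets the incident be closed without discussion.
    return {"status": "closed", "owner": playbook["owner"], "ran": playbook["steps"]}

closed = run_playbook("workstation-malware")
escalated = run_playbook("suspicious-login")
```

Each week's automation work then amounts to moving a few more incident types from the "escalated" path into the registry, which is the staircase model in miniature.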

As in a Michelin-starred restaurant, the division of labor and the local efficiencies of specialists can allow us to increase our sophistication. As strategy and innovation are driven down from the head chef, each part of the process has its own responsibility to create efficiencies and increase its capabilities.
