The other day I was reading a history of the events leading up to the Challenger space shuttle disaster, which got me thinking about the ways different industries manage risk. In that tragic case, the design of the O-ring seals in the right solid rocket booster presented a known risk that nontechnical executives downplayed and did not fully comprehend when they made the decision to move forward with the launch.

Similarly, the security industry contends with a range of cyber risks that can cause catastrophic damage to a business, such as large-scale disclosure of personal data, failure of power infrastructure caused by rogue threat actors and the interruption of critical emergency service systems.

Having worked with many clients in various industries over the years, I have observed myriad approaches to risk management. But the fact remains that many organizations are still immature in this area because best practices are not typically shared across industries. Organizations are often wed to their method of managing risk and do not look outside for ways to improve.

Assurance and Traceability Are Key

In the 1990s — around the time when I first completed a security evaluation of the Advanced Interactive eXecutive (AIX) operating system — the Information Technology Security Evaluation Criteria (ITSEC) was considered a best practice in the U.K. Today, we have the Common Criteria for Information Technology Security Evaluation (CC) and the companion Common Methodology for Information Technology Security Evaluation (CEM).

The security evaluation process considers the functionality of security controls and the assurance of those controls. Depending on what the solution protects, there is a requirement for increased levels of assurance through additional documentation and testing — with an associated cost.

To provide assurance, security requirements were traced in a traceability matrix from the initial requirements through the different levels of design to testing. Outside the public sector, the architectural thinking processes in use today apply some traceability, but without rigor or consideration of the differing levels of detail required by the risk to the business.
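As a minimal sketch, forward traceability can be modeled as links from each requirement to the design elements and test cases that cover it, with gaps flagged automatically; the reverse direction works the same way. The requirement, design and test IDs below are hypothetical:

```python
# Minimal sketch of a requirements traceability matrix: each security
# requirement is traced forward to design components and test cases,
# and any requirement with no test is flagged as a traceability gap.
# All IDs here are hypothetical illustrations.

requirements = {
    "SR-01": "All data at rest must be encrypted",
    "SR-02": "Sessions must time out after inactivity",
}

design_trace = {  # requirement -> design components implementing it
    "SR-01": ["DES-ENC-VOL", "DES-KEY-MGMT"],
    "SR-02": ["DES-SESSION"],
}

test_trace = {  # requirement -> test cases verifying it
    "SR-01": ["TC-101", "TC-102"],
    # SR-02 has no test case yet -- a gap the matrix should surface
}

def untested_requirements(reqs, tests):
    """Return the IDs of requirements with no associated test case."""
    return [rid for rid in reqs if not tests.get(rid)]

print(untested_requirements(requirements, test_trace))  # ['SR-02']
```

In practice the matrix would be held in a requirements-management tool rather than code, but the principle is the same: every requirement must be answerable with "where is it designed, and where is it tested?"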

Today, the “NASA Systems Engineering Handbook” highlights the need for bidirectional traceability of requirements in solutions.

The Difference Between Verification and Validation

In pharmaceuticals, there is the concept of verification and validation of a solution. Verification means ensuring that the solution is built according to the requirements and design. Traceability supports this principle — together with reviews of the solution — to ensure that functionality has been implemented and will be tested.

Validation means ensuring the solution meets users’ needs. In security, it’s not just testing that the product will enforce the control, but making sure the users’ needs are met within the environment where it is being used. A security control that forces a user to log in every 30 minutes may improve security, but if repeated logins across several systems consume 20 minutes of each 30-minute session, leaving only 10 minutes of productive work, it does not meet the needs of the user or the business.

Today, NASA uses the verification and validation approach in its Systems Engineering Handbook, and I am sure other industries can make use of these principles.

Minimize Risk With a Layered Defense Strategy

Financial and banking institutions are increasingly adopting approaches to risk management that outline three lines of defense to ensure ownership, oversight and governance. The second line of defense looks at the overall aggregate risk for the organization. In the case of the space shuttle program, the challenge was how to effectively communicate that the material risks could be catastrophic.

Barrier risk models for visualizing risk, such as the bowtie model, originated in the oil and gas industry in the 1990s and have since been adopted by the military and aviation industries. We know that incidents will happen, so it’s important to pay the same level of attention to the preparation and prevention controls as to the detection, response and recovery controls. At the center of the bowtie model is the catastrophic event that may happen, with the controls preventing the event on the left and the controls containing its consequences on the right.

Combining the five functions of the NIST Cybersecurity Framework with the bowtie model is a great way to represent the depth and strength of security controls to employees with a less technical background. It also allows engineering staff to better demonstrate that additional controls are not required when the current security controls are appropriate to the risk.

There are many different ways to represent the controls; consider data-at-rest encryption as an example.
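As a hedged sketch (not any official notation), a bowtie for a data-at-rest disclosure event can be captured as a simple data structure, with preventive controls on the left and containment controls on the right, each tagged with a NIST CSF function. The control names below are hypothetical illustrations:

```python
# Illustrative bowtie for the central event "unauthorized disclosure of
# data at rest": preventive controls stop the event from happening;
# containment controls limit its consequences. Each control is tagged
# with a NIST CSF function. Control names are hypothetical.

bowtie = {
    "event": "Unauthorized disclosure of data at rest",
    "preventive": [  # left side of the bowtie
        {"control": "Full-disk encryption (AES-256)", "csf": "Protect"},
        {"control": "Key management using an HSM", "csf": "Protect"},
        {"control": "Inventory of sensitive data stores", "csf": "Identify"},
    ],
    "containment": [  # right side of the bowtie
        {"control": "File-integrity and access monitoring", "csf": "Detect"},
        {"control": "Incident response playbook for data exposure", "csf": "Respond"},
        {"control": "Backup restore and key rotation", "csf": "Recover"},
    ],
}

def controls_by_csf_function(model):
    """Count controls per NIST CSF function across both sides of the bowtie."""
    counts = {}
    for side in ("preventive", "containment"):
        for control in model[side]:
            counts[control["csf"]] = counts.get(control["csf"], 0) + 1
    return counts

print(controls_by_csf_function(bowtie))
```

A tally like this makes imbalances visible at a glance: an organization with many Protect controls but nothing under Respond or Recover has a lopsided bowtie.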

How Strong and Mature Are Your Security Controls?

Each of the controls an organization uses can have a different strength of mechanism. A six-character password such as “123456,” freely chosen by a user, is very weak compared to a password whose complexity is enforced by software at the time it is changed. A single strong control is better than many controls that have a low strength of mechanism.
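The difference in strength can be made concrete with a rough entropy estimate: bits of entropy are roughly the password length times the base-2 logarithm of the alphabet size. The policy parameters below (12 characters, ~94 printable ASCII characters) are illustrative assumptions, not a recommendation:

```python
import math

# Rough strength-of-mechanism comparison: a freely chosen six-digit
# password versus a hypothetical software-enforced 12-character policy
# drawing from ~94 printable ASCII characters.
# Entropy (bits) ~= length * log2(alphabet size).

def entropy_bits(length, alphabet_size):
    """Upper-bound entropy estimate for a randomly chosen password."""
    return length * math.log2(alphabet_size)

weak = entropy_bits(6, 10)     # digits only, e.g. "123456"
strong = entropy_bits(12, 94)  # enforced mixed-character policy

print(round(weak, 1), round(strong, 1))  # 19.9 78.7
```

Note that this is an upper bound: “123456” appears in every breached-password list, so its effective strength is far below even the ~20 bits the arithmetic suggests.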

The context of how a security mechanism is implemented or deployed may also alter its strength. A large encryption key may be undermined by poor randomness in its generation, and inspecting a TLS session may weaken the effectiveness of its encryption. Think about the context of the implementation.

Each control may also have a different level of maturity. If I use a firewall that has been installed without a formal design, without testing and with no documented procedures to manage the life cycle, the maturity is low with an increased likelihood that the controls will be inadequate. Having one very mature process that is enforced rigorously may be better than having many controls that are poorly maintained. Using the Capability Maturity Model Integration (CMMI) can help organizations assess the maturity of a process. Without the right balance of procedural, organizational and technical controls, the maturity may not be adequate.
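As a simple sketch of the idea (not an official CMMI appraisal), each control can be scored against a few basic practices, with each missing practice capping the achievable maturity level. The controls, practices and mapping below are hypothetical:

```python
# Hedged sketch of scoring control maturity on a CMMI-style 1-5 scale
# (1 = Initial ... 5 = Optimizing). The controls, practice checklist and
# level mapping here are illustrative assumptions, not CMMI itself.

MATURITY_LEVELS = {1: "Initial", 2: "Managed", 3: "Defined",
                   4: "Quantitatively Managed", 5: "Optimizing"}

controls = {
    # control -> (has formal design, is tested, has lifecycle procedures)
    "Firewall": (False, False, False),       # installed ad hoc, as in the text
    "Disk encryption": (True, True, True),   # designed, tested, managed
}

def maturity_level(has_design, is_tested, has_procedures):
    """Very rough mapping: each practice in place raises the level by one."""
    return 1 + sum((has_design, is_tested, has_procedures))  # 1..4

for name, practices in controls.items():
    level = maturity_level(*practices)
    print(f"{name}: level {level} ({MATURITY_LEVELS[level]})")
```

Even a toy scorecard like this surfaces the point from the text: the ad hoc firewall lands at level 1 (Initial), signaling a raised likelihood that the control is inadequate.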

Tips for Managing Cyber Risks

The next time you have a risk that is considered material to the operation of your business — especially one that could result in a catastrophic incident — consider what you can learn from how other industries manage risk. Below are some best practices for managing cyber risks:

  • Ensure traceability of controls with assurance appropriate to the risk.
  • Consider both verification and validation in the assurance of a solution.
  • Use multiple levels of risk review with three lines of defense.
  • Examine defense in depth with an appropriate strength of mechanism.
  • Assess and drive continuous improvement in the maturity of control mechanisms.

Last, but certainly not least, make sure you communicate these principles to staff and suppliers to get them on board and garner their support in managing risk effectively.

What is your industry’s primary security challenge?
