Containers — which are lightweight software packages that include entire runtime environments — have solved the issues of portability, compatibility and rapid, controlled deployment. Containers include an application; all its dependencies, libraries and other binaries; and configuration files needed to run them.

Heralding the era of microservices, Infrastructure as Code and service-oriented architectures (SOA), containers are supported by many of the most popular cloud infrastructures and power most mobile applications today.

In less than a decade, the application isolation concept known as containers has surged to the very forefront of cutting-edge computing technology, riding the wave of cloud computing. In the same way that x86 virtualization technology transformed data center computing, containers have redefined the best-of-breed approaches for delivering application performance at scale.

Defining the Problem

Just like any computing system, containers are made of software components, any of which can contain flaws and vulnerabilities. Vulnerability management for containers is the process of identifying, prioritizing and fixing vulnerabilities that could potentially expose containers. That exposure can easily extend to the other systems, applications and data to which containers are connected. Defects in these components could allow an attacker to gain control of a system and access sensitive data, resulting in financial loss, reputational damage and system outages.

As the popularity of container technology increases, so does the importance of detecting and remediating vulnerabilities in the code used to create, operate and manage them.

Learn more about containers

Challenges

Where to Detect Vulnerabilities

A typical organization making use of containers follows a development pipeline that logically progresses from planning, code creation, revision and building to testing, releasing, deploying into the production world and, ultimately, steady-state operations. Each of these phases introduces an opportunity to detect and correct software vulnerabilities.

At what phase is it best to detect vulnerabilities? How can this be done while minimizing disruptions to the development cycle? What tools are best suited for the task? The answers often depend on the tools an organization already has in place, and the decision to use open source or commercial tools can shape its vulnerability management strategy as well.

Shifting Left

A term gaining in popularity as companies move from a traditional waterfall development model to an agile methodology is “shifting left.” In a typical development pipeline, the phases are read left to right:

Plan → Code → Build → Test → Release → Deploy → Operate

In the world of vulnerability management, shifting left means a deliberate effort to implement vulnerability scanning closer to the beginning of the Software Development Life Cycle (SDLC) or development pipeline.
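As a concrete illustration of shifting left, the hedged sketch below shows a check that could run as a pre-commit hook or merge-request job: it pulls the base image references out of a Dockerfile and scans them before any application code is layered on top. It assumes the open source Trivy scanner is available on the workstation or build runner; the file path and severity threshold are placeholders, and any scanner with a CLI and meaningful exit codes could be substituted.

```python
"""Illustrative "shift left" check: scan a Dockerfile's base images before
anything is built on top of them. Assumes the Trivy CLI is installed;
paths and the severity threshold are placeholders."""
import re
import subprocess
import sys

def base_images(dockerfile: str = "Dockerfile") -> list[str]:
    # Collect every FROM line (multi-stage builds can have several),
    # skipping the empty "scratch" image.
    with open(dockerfile) as f:
        return [
            m.group(1)
            for line in f
            if (m := re.match(r"\s*FROM\s+(?:--platform=\S+\s+)?(\S+)", line, re.IGNORECASE))
            and m.group(1).lower() != "scratch"
        ]

if __name__ == "__main__":
    failed = False
    for image in base_images():
        print(f"scanning base image {image} ...")
        # Trivy exits non-zero (via --exit-code) when findings at the
        # requested severities are present.
        result = subprocess.run(
            ["trivy", "image", "--exit-code", "1",
             "--severity", "HIGH,CRITICAL", image],
            check=False,
        )
        failed = failed or result.returncode != 0
    sys.exit(1 if failed else 0)
```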

Strategies for Container Vulnerability Management

Beyond simply shifting left, there are several good opportunities to detect vulnerabilities in a container environment.

CI/CD Pipeline Scanning

The continuous integration/continuous delivery (CI/CD) pipeline is a specific phase of active development when code is being created, reviewed and tested. Tools like Jenkins, GitLab and Bamboo are popular for automating the workflows involved in building software modules. This is the best place to perform vulnerability scanning because issues are identified, remediated and retested quickly and at low cost. Several popular vulnerability management tools can integrate with these workflow automation systems.
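The hedged sketch below shows one way a pipeline stage might gate a build on scan results: it invokes Trivy against the image produced earlier in the pipeline and fails the stage if HIGH or CRITICAL findings are present. The image name is a placeholder, and any scanner that reports findings through its exit code would work the same way.

```python
"""Minimal sketch of a CI pipeline gate, assuming the Trivy scanner is
installed on the build agent. The image name and severity threshold are
illustrative placeholders."""
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"  # hypothetical image built earlier in the pipeline

def scan_image(image: str) -> int:
    # Trivy returns the exit code requested via --exit-code when findings
    # at or above the listed severities are present.
    result = subprocess.run(
        ["trivy", "image",
         "--severity", "HIGH,CRITICAL",
         "--exit-code", "1",
         image],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    if scan_image(IMAGE) != 0:
        print(f"Blocking the build: {IMAGE} has HIGH/CRITICAL vulnerabilities")
        sys.exit(1)
    print(f"{IMAGE} passed the vulnerability gate")
```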

Registry Scanning

The registry is a repository (or collection of repositories) used to store container images, which are the templates used to deploy multiple individual instances of running containers. A major component of container orchestration involves instantiating containers from a registry into a production computing environment.

Because the registry is such a commonplace and integral component, almost all popular vulnerability scanning tools can be configured to scan images as they reside in a registry. This method for identifying container vulnerabilities is probably the lowest-cost, highest-value way to find and fix security issues.

By pinpointing vulnerabilities in your container images, you can fix defects that potentially reside in dozens or hundreds of running container instances. Through the magic of orchestration, old containers can be destroyed and replaced seamlessly with updated versions.
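As a rough illustration, the sketch below walks a private registry that exposes the standard Docker Registry HTTP API v2, enumerates every repository and tag, and hands each image reference to a scanner such as the pipeline gate shown earlier. The registry URL and credentials are placeholders.

```python
"""Illustrative periodic registry scan, assuming a registry exposing the
standard Docker Registry HTTP API v2 and a scan_image() helper like the
one in the CI sketch above. URLs and credentials are placeholders."""
import requests

REGISTRY = "https://registry.example.com"  # hypothetical private registry
AUTH = ("scanner", "s3cret")               # placeholder credentials

def list_images(registry: str) -> list[str]:
    # Enumerate every repository and tag so each image version gets scanned.
    repos = requests.get(f"{registry}/v2/_catalog", auth=AUTH, timeout=30).json()["repositories"]
    images = []
    for repo in repos:
        tags = requests.get(f"{registry}/v2/{repo}/tags/list", auth=AUTH, timeout=30).json().get("tags") or []
        images.extend(f"{registry.removeprefix('https://')}/{repo}:{tag}" for tag in tags)
    return images

if __name__ == "__main__":
    for image in list_images(REGISTRY):
        print(f"scanning {image} ...")
        # scan_image(image)  # reuse the pipeline gate from the previous sketch
```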

Runtime Environment Scanning

Scanning running containers for vulnerabilities is the approach closest to traditional vulnerability scanning, which has been used in information security for decades. It simply means executing a vulnerability scan against a running container to highlight defects. The nature of containers, however, is that you don’t fix what is found — you remediate the container image in your development pipeline and replace the faulty instance. This can be a relatively expensive proposition in terms of time, if not cost. That said, this approach is the best way to detect “rogue” containers that have not been properly deployed.
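One lightweight runtime check, sketched below under the assumption that the Docker SDK for Python is available and the host runs Docker, is to enumerate running containers and flag any whose image does not come from an approved registry, a simple way to surface the “rogue” containers mentioned above. The approved registry prefix is a placeholder for your real policy.

```python
"""Rough sketch of a runtime check for "rogue" containers, assuming the
Docker SDK for Python (pip install docker) and a local Docker host.
The approved registry list is a placeholder."""
import docker

APPROVED_REGISTRIES = ("registry.example.com/",)  # hypothetical trusted registry prefixes

def find_rogue_containers() -> list[str]:
    client = docker.from_env()
    rogue = []
    for container in client.containers.list():  # running containers only
        image_refs = container.image.tags or [container.image.id]
        if not any(ref.startswith(APPROVED_REGISTRIES) for ref in image_refs):
            rogue.append(f"{container.name} ({image_refs[0]})")
    return rogue

if __name__ == "__main__":
    for entry in find_rogue_containers():
        # Remediation happens in the pipeline: rebuild the image and redeploy,
        # rather than patching the running container in place.
        print(f"rogue or unapproved container: {entry}")
```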

Master Node (Host) Scanning

It is important to continue scanning the supporting infrastructure in your containerized environment — that is, the hosts and virtual machines that support your container runtime (containerd, LXC, CRI-O, etc.). The good news is that many traditional vulnerability management vendors offer tools that integrate with popular CI/CD tools, registries and runtimes, providing a one-stop shop of comprehensive capabilities.

Key Principles for Container Security

Utilizing containers does not alleviate the need for methodically identifying and fixing software vulnerabilities. Containers — although immutable — are made of complex layers, each with potential vulnerabilities and security challenges.

There are multiple opportunities in the development pipeline to scan for and highlight these vulnerabilities. Organizations have a wide array of products from which to choose. It’s important to understand the capabilities of these tools, which vary widely and come in both open source and commercial flavors. As organizations select a strategy and products for vulnerability management, some key principles should be considered:

  • Build containers using minimal base images or “distroless” images from trusted sources, keeping in mind that some container scanning tools have problems with distributions that do not have a package manager.
  • Keep every tool, package and library you add to an image patched and up to date.
  • Carefully select a vulnerability scanning tool suited to your organization’s processes, DevOps practices, ecosystem and capabilities.
  • Plan to implement vulnerability scanning at every phase of the container development pipeline, being mindful of compliance requirements.
  • Consider additional controls such as staging registries, Kubernetes admission controllers, image signing and multi-stage builds (a minimal admission controller sketch follows this list).
  • Remember that compliance and demonstrable controls remain important considerations for any computing environment — containers are no exception.
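As one example of those additional controls, the sketch below outlines a Kubernetes validating admission webhook that rejects pods pulling images from anywhere other than an approved staging registry. It assumes Flask; the registry prefix is a placeholder, and the TLS certificates, Service and ValidatingWebhookConfiguration a real cluster requires are omitted.

```python
"""Bare-bones sketch of a Kubernetes validating admission webhook that
rejects pods pulling images from unapproved registries. Assumes Flask;
the registry prefix is a placeholder and TLS/cluster wiring is omitted."""
from flask import Flask, request, jsonify

app = Flask(__name__)
APPROVED_REGISTRY = "registry.example.com/"  # hypothetical staging registry

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    pod = review["request"]["object"]
    images = [c["image"] for c in pod["spec"].get("containers", [])]
    bad = [img for img in images if not img.startswith(APPROVED_REGISTRY)]
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": not bad,
            "status": {"message": f"images not from approved registry: {bad}"} if bad else {},
        },
    })

if __name__ == "__main__":
    # In a real cluster this runs behind TLS as a Service referenced by a
    # ValidatingWebhookConfiguration; plain HTTP here is for illustration only.
    app.run(port=8443)
```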

To learn more about vulnerability management for containers, watch this presentation.

To learn more about X-Force Red’s cloud testing and vulnerability management services, visit our site here.
