Containers — which are lightweight software packages that include entire runtime environments — have solved the issues of portability, compatibility and rapid, controlled deployment. Containers include an application; all its dependencies, libraries and other binaries; and configuration files needed to run them.

Heralding the era of microservices, Infrastructure as Code and service-oriented architectures (SOA), containers are supported by all of the major cloud platforms and underpin many of the web and mobile application back ends in use today.

In less than a decade, the application isolation concept known as containers has surged to the forefront of computing technology, riding the wave of cloud computing. In the same way that x86 virtualization transformed data center computing, containers have redefined the best-of-breed approaches for delivering application performance at scale.

Defining the Problem

Just like any computing system, containers are built from software components, any of which can contain flaws and vulnerabilities. Vulnerability management for containers is the process of identifying, prioritizing and fixing vulnerabilities that could expose them. That exposure can easily extend to the other systems, applications and data to which containers are connected. Defects in these components could allow an attacker to gain control of a system and access sensitive data, resulting in financial loss, reputational damage and system outages.

As the popularity of container technology increases, so does the importance of detecting and remediating vulnerabilities in the code used to create, operate and manage them.


Challenges

Where to Detect Vulnerabilities

A typical organization making use of containers follows a development pipeline that logically progresses from planning, code creation, revision and building to testing, releasing, deploying into the production world and, ultimately, steady-state operations. Each of these phases introduces an opportunity to detect and correct software vulnerabilities.

At what phase is it best to detect vulnerabilities? How can this be done while minimizing disruption to the development cycle? What tools are best suited to the task? The answers often depend on the tools an organization already uses, and the choice between open source and commercial tools can shape its vulnerability management strategy as well.

Shifting Left

A term gaining in popularity as companies move from a traditional waterfall development model to an agile methodology is “shifting left.” In a typical development pipeline, the phases are read left to right:

Plan —> Code —> Build —> Test —> Release —> Deploy —> Operate

In the world of vulnerability management, shifting left means a deliberate effort to implement vulnerability scanning closer to the beginning of the Software Development Life Cycle (SDLC), or development pipeline.

Strategies for Container Vulnerability Management

Beyond simply shifting left, there are multiple apt opportunities to detect vulnerabilities in a container environment.

CI/CD Pipeline Scanning

The Continuous Integration/Continuous Delivery (CI/CD) pipeline is the phase of active development in which code is created, reviewed and tested. Tools like Jenkins, GitLab and Bamboo are popular for automating the workflows involved in building software modules. This is the best place to perform vulnerability scanning, because issues can be identified, remediated and retested quickly and at low cost. Several popular vulnerability management tools integrate with these workflow automation systems.
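A pipeline-stage gate can make scan results actionable by failing the build when blocking findings appear. The sketch below assumes a simplified, hypothetical JSON report shape; real scanners each have their own output formats, so the parsing would differ in practice.

```python
# Minimal sketch of a CI/CD quality gate: fail the build when a container
# image scan reports HIGH or CRITICAL vulnerabilities. The report format
# here is hypothetical, for illustration only.
import json

BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}

def gate(report_json: str) -> tuple[bool, list[str]]:
    """Return (passed, blocking_findings) for a scan report."""
    report = json.loads(report_json)
    blocking = [
        f"{v['id']} ({v['severity']}) in {v['package']}"
        for v in report.get("vulnerabilities", [])
        if v["severity"].upper() in BLOCKING_SEVERITIES
    ]
    return (not blocking, blocking)

if __name__ == "__main__":
    sample = json.dumps({
        "image": "registry.example.com/shop/api:1.4.2",  # illustrative name
        "vulnerabilities": [
            {"id": "CVE-2023-0001", "severity": "LOW", "package": "zlib"},
            {"id": "CVE-2023-0002", "severity": "CRITICAL", "package": "openssl"},
        ],
    })
    passed, findings = gate(sample)
    print("PASS" if passed else "FAIL", findings)
```

In a Jenkins or GitLab job, a script like this would run after the scanner and exit nonzero on failure, stopping the vulnerable image before it ever reaches a registry.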

Registry Scanning

The registry is a repository (or collection of repositories) used to store container images, which are the templates used to deploy multiple individual instances of running containers. A major component of container orchestration involves instantiating containers from a registry into a production computing environment.

Because the registry is such a commonplace and integral component, almost all popular vulnerability scanning tools can be configured to scan images as they reside in a registry. This method for identifying container vulnerabilities is probably the lowest-cost, highest-value way to find and fix security issues.

By pinpointing vulnerabilities in your container images, you can fix defects that potentially reside in dozens or hundreds of running container instances. Through the magic of orchestration, old containers can be destroyed and replaced seamlessly with updated versions.
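The fix-once, redeploy-many payoff can be made concrete with a small sketch. Given registry scan findings and a count of running instances per image (both illustrative data below), one rebuild of a vulnerable image covers every container derived from it:

```python
# Sketch: map registry scan findings to the running containers a single
# image rebuild would fix. Image names and counts are illustrative.
def instances_fixed_by_rebuild(scan_findings, running_instances):
    """scan_findings: {image: [CVE ids]}; running_instances: {image: count}.
    Returns {image: count} for images that need a rebuild, counting the
    running containers that one fix will cover."""
    return {
        image: running_instances.get(image, 0)
        for image, cves in scan_findings.items()
        if cves  # only images with at least one finding
    }

findings = {
    "shop/api:1.4.2": ["CVE-2023-0002"],
    "shop/web:2.0.1": [],
}
running = {"shop/api:1.4.2": 40, "shop/web:2.0.1": 12}
print(instances_fixed_by_rebuild(findings, running))  # {'shop/api:1.4.2': 40}
```

Here, patching one image and letting the orchestrator roll it out replaces all 40 vulnerable instances, while the clean image is left alone.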

Runtime Environment Scanning

Scanning running containers for vulnerabilities is closest to the traditional method for finding flaws and has been used in information security for decades. This approach simply means executing a vulnerability scan against a running container to highlight defects. The nature of containers, however, is that you don’t fix what is found — you remediate the container image in your development pipeline and replace the faulty instance. This can be a relatively expensive proposition in terms of time, if not cost. That said, this approach is the best way to detect “rogue” containers that have not been properly deployed.
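Rogue detection at runtime often comes down to comparing what is actually running against what the pipeline approved. A minimal sketch, assuming the running digests have already been collected from the orchestrator (the values below are made up):

```python
# Sketch: flag "rogue" containers whose image digests do not appear in the
# set of approved, registry-scanned digests. In practice the running
# digests would come from your orchestrator's API; these are illustrative.
def find_rogue(running, approved_digests):
    """running: {container_name: image_digest}. Returns names of containers
    running an image that never passed through the approved registry."""
    return sorted(
        name for name, digest in running.items()
        if digest not in approved_digests
    )

approved = {"sha256:aaa111", "sha256:bbb222"}
running = {
    "api-7f9c": "sha256:aaa111",
    "debug-pod": "sha256:fff999",  # deployed outside the pipeline
}
print(find_rogue(running, approved))  # ['debug-pod']
```

Anything this check surfaces is a candidate for replacement through the pipeline rather than in-place patching.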

Master Node (Host) Scanning

It is important to continue scanning the supporting infrastructure in your containerized environment: the hosts and virtual machines that support your container runtime (containerd, LXC, CRI-O, etc.). The good news is that many traditional vulnerability management vendors offer tools that integrate with popular CI/CD systems, registries and runtimes, providing a one-stop shop of comprehensive capabilities.
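One small slice of host scanning is verifying that every node runs an adequately patched container runtime. The sketch below compares illustrative version strings against a minimum; the versions are hypothetical, and a real scanner does this far more thoroughly against actual advisories.

```python
# Sketch: report hosts whose container runtime is older than a minimum
# patched version. Hostnames and version numbers are illustrative.
def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

def outdated_hosts(hosts, minimum):
    """hosts: {hostname: runtime_version}; minimum: e.g. '1.6.21'."""
    floor = parse_version(minimum)
    return sorted(h for h, v in hosts.items() if parse_version(v) < floor)

hosts = {"node-1": "1.6.21", "node-2": "1.5.9"}
print(outdated_hosts(hosts, "1.6.21"))  # ['node-2']
```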

Key Principles for Container Security

Utilizing containers does not alleviate the need for methodically identifying and fixing software vulnerabilities. Containers — although immutable — are made of complex layers, each with potential vulnerabilities and security challenges.

There are multiple opportunities in the development pipeline to scan for and highlight these vulnerabilities, and organizations have a wide array of products from which to choose. It's important to understand the capabilities of these tools, which vary widely and are offered in both open source and commercial flavors. As organizations select a strategy and products for vulnerability management, some key principles should be considered:

  • Build containers using minimal base images or “distroless” images from trusted sources, keeping in mind that some container scanning tools have problems with distributions that do not have a package manager.
  • Maintain all the tools, packages and libraries that you add to images.
  • Carefully select a vulnerability scanning tool suited to your organization’s processes, DevOps practices, ecosystem and capabilities.
  • Plan to implement vulnerability scanning at every phase of the container development pipeline, being mindful of compliance requirements.
  • Consider additional controls, such as Staging Registries, Kubernetes Admission Controllers, image signing, multi-stage builds, etc.
  • Remember that compliance and demonstrable controls remain important considerations for any computing environment — containers are no exception.
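The first and fifth principles above can be combined in a multi-stage build: compile in a full-featured image, then ship only the artifact on a distroless base. A minimal sketch (image names, paths and the Go toolchain are illustrative assumptions, not a prescription):

```dockerfile
# Build stage: full toolchain, never shipped to production
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: distroless base with no shell and no package manager,
# which shrinks the attack surface -- but note, as above, that some
# scanners rely on a package manager database and need distroless support
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```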

