This is the first installment in a two-part series about application containers. Continue on to part two to learn how to apply security best practices to application containers.

Think about a revolutionary invention. What’s the first thing that comes to mind?

Maybe it’s the advent of the internet, or perhaps your brain skipped all the way back to the steam engine. When asked that question, how many people do you think would land on shipping containers? They might not be the first thing that comes to mind, but the invention of shipping containers in the 1950s catalyzed change. Introducing a standard container helped pave the way for faster, cheaper and more reliable transportation of goods across the globe.

In many ways parallel to how physical containers shaped shipping, application containers are revolutionizing software development methods. Much like physical containers, application containers are a form of digital packaging: they provide virtual isolation, allowing multiple applications that share the same operating system (OS) or cloud to be deployed and run independently.

Containers support a microservice-based architecture, an approach to redefining large-scale software projects to be more scalable and modular. Container technology can also make it easier to run applications consistently across different environments and conditions, because each container ships with its own runtime environment. Combined with the open-source movement that has permeated the industry, this new wave of development has been a boon to cloud providers, developers and managed services alike.

Nowadays, many organizations are shifting to container-based technology because it gives them the flexibility to deploy independently developed software modules more quickly than ever. It’s also no coincidence that containerization is evolving alongside the adoption of the hybrid cloud and the development of cloud-native applications. The future of application life cycle management will likely be more open-source, container-driven and Kubernetes-orchestrated.

Making the Technical Case for Containerization

Organizations have been increasingly adopting container-based technology because it provides unprecedented portability, enabling them to move applications across different platforms and environments and run them reliably in each one.

When applications move from a developer’s machine into a staging environment, from a staging environment to a production environment or from a physical machine to a virtual machine (VM), incompatibility issues often arise. For example, let’s say you tested the same software in staging with Python version 2.7, but you’re using Python version 3.0 in production. In that scenario, you could face several incompatibility issues in running the application once it is moved to production — a problem that is crucial to avoid from a business continuity standpoint.
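One way containers head off this kind of drift is by pinning the runtime into the image itself. Here is a minimal, illustrative Dockerfile sketch (the image tag, file names and entry point are assumptions, not taken from any particular project):

    # Pin the interpreter so dev, QA and production all run Python 2.7
    FROM python:2.7-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]

Because the interpreter version is baked into the image, staging and production can no longer drift apart.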

So why not VMs? We know VMs can include a complete OS replete with drivers, binaries and libraries, and even actual applications. Each OS sits on top of a hypervisor that controls the physical server hardware. However, one known problem with the VM approach is that running a full OS per workload consumes substantial server memory and hurts efficiency.

Containerization, on the other hand, represents a much more streamlined approach to DevOps: instead of patching every server individually, teams can update all related systems at once by rolling out a new container image. Containerization can also help reduce wasted resources because each container holds only the application it manages and its related binaries or libraries.

Furthermore, a container bundles the runtime environment, required dependencies, libraries, other binaries and the configuration files needed to run the application into a single package. By containerizing an application's platform and its dependencies, differences in OS distribution and underlying infrastructure are abstracted away.
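In Docker terms, that single package is the container image: build it once, then run the same artifact everywhere. A hedged sketch of the workflow (the image name and tag are illustrative):

    docker build -t myapp:1.0 .     # bundle the app, its runtime and dependencies into one image
    docker run --rm myapp:1.0       # the identical image runs the same way on any Docker host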

Containers can provide multiple benefits, including:

  • Isolating applications from each other;
  • Isolating applications from the host;
  • Improving the security of applications by restricting their capabilities; and
  • Encouraging adoption of the principle of least privilege (see the sketch after this list).
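With Docker, the last two benefits map directly onto standard docker run flags. A minimal sketch (the image name is illustrative):

    # Drop all Linux capabilities, run as an unprivileged user and
    # mount the container's root filesystem read-only:
    docker run --rm \
      --cap-drop=ALL \
      --user 1000:1000 \
      --read-only \
      myapp:1.0

Starting from zero capabilities and adding back only what the application needs is the principle of least privilege in practice.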

To realize the potential of containers, it’s critical to adhere to standards and best practices. If we look at the history of containerization and its applications today, it’s clear that container security needs to play a much stronger role in the future to enable the agility offered by container technologies without compromising the security of the applications they help manage.

A Brief History of Containerization

The concept of containerization was introduced back in 1979 with the development of chroot, a mechanism for creating an isolated environment within a Unix operating system that was added in Version 7 Unix. Chroot marked the beginning of container-style process isolation by restricting an application's file access to a specific directory (its new root) and that directory's children. A key benefit of chroot separation was improved system security: even if a vulnerability inside the isolated environment were exploited, it could not compromise external systems.
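The mechanism is still available on any Unix-like system today. A brief illustration (the jail directory is a hypothetical path and must already contain its own shell binary and libraries):

    # Processes started this way see /srv/jail as their filesystem root:
    sudo chroot /srv/jail /bin/sh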

It’s All About Isolation

Process Containers, launched by Google in 2006, took isolation a step further by containing processes rather than just applications. Process containers were designed to limit, account for and isolate a collection of processes' use of resources such as the central processing unit (CPU), memory, disk input/output (I/O) and network. They were renamed Control Groups (aka cgroups) a year later.

Cgroups brought resource control to the containerization domain, governing how groups of processes share resources and reining in each group's access to CPU time and memory. The concept was introduced with the purpose of adding even more isolation to keep processes separate from one another. Cgroups were merged into the Linux kernel in January 2008, with kernel v2.6.24, after which the Linux container technology LXC emerged.
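On a modern Linux machine you can watch cgroups at work directly from the shell. A sketch, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the cpu and memory controllers enabled, run as root:

    # Create a control group and cap it at half a CPU core and 256 MB of RAM:
    mkdir /sys/fs/cgroup/demo
    echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max    # 50 ms of CPU per 100 ms period
    echo 268435456 > /sys/fs/cgroup/demo/memory.max      # 256 MB memory ceiling
    echo $$ > /sys/fs/cgroup/demo/cgroup.procs           # move the current shell into the group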

With containerization, isolation is the name of the game. To add another layer of it, Linux namespaces, a feature of the Linux kernel that partitions kernel resources, came along to isolate global system resources between independent processes. Namespaces provide the basis for container network security, hiding one user or group's activity from others on the same network or asset.
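Namespaces are easy to experiment with using the unshare utility from util-linux. For example:

    # Start a shell in its own PID and mount namespaces with a private /proc:
    sudo unshare --pid --fork --mount-proc /bin/bash
    ps aux    # inside, only the processes of the new namespace are visible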

Containerization Hits Dev Mainstream

Container technology ramped up in 2017 when companies such as Pivotal, Rancher, AWS and Docker changed gears to support the open-source Kubernetes container scheduler and orchestration tool. By doing that, they cemented the tool’s position as the default container orchestration technology, making it one of the most popular development tools used today.

To help streamline the use of Kubernetes, Microsoft enabled organizations to run Linux containers on Windows Server, which was a major development for Microsoft shops that wanted to containerize applications yet remain compatible with their existing systems.

Get Ahead of Container Security Challenges

Containerization has the potential to improve overall productivity in an organization and help speed up the software delivery process. Containers play a pivotal role in the success of DevOps, since container images serve as templates for the full IT stack, from the OS to middleware to application code, all combined into a single image. Therefore, a container runs the same way in development as it does in quality assurance (QA) and production, so an application moves from one environment to another without hassle, which helps speed up deployment.

As with any new technology, containerization has potential benefits to your business and comes with unique security challenges that your organization should also be aware of. Let’s look at some of the big-picture container security considerations that organizations should study before moving to containerization at scale.

For starters, before containers became the new norm, development teams worked on VMs. That was somewhat safer because, with VMs, a hypervisor runs directly on the hardware and cannot see what is running inside each VM. The attack surface is therefore much smaller on VMs than in a container-based environment.

With containers, the attack surface is larger because all the application containers share the same host OS kernel. Hence, root access on the host could allow an attacker to access all the containers and see what is running inside the ones they are not authorized to reach.
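The shared kernel is simple to observe on a Linux host with Docker installed:

    uname -r                           # kernel version reported by the host
    docker run --rm alpine uname -r    # the same version: containers share the host kernel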

Containers are also more vulnerable to OS attacks because the OS system call interface presents a far larger attack surface than the narrow interface between a VM and a hypervisor. A vulnerability in a system call can give an attacker access to the kernel, and that kind of privileged access could compromise the entire asset on which the containers are hosted.
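One common mitigation is to shrink the set of system calls a container may issue, for example with a seccomp profile. A hedged sketch (profile.json stands in for a real OCI seccomp policy file; the image name is illustrative):

    # Run the container under a custom seccomp profile that allows
    # only the system calls the application actually needs:
    docker run --rm --security-opt seccomp=profile.json myapp:1.0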

But security issues can be addressed during the design phase to enable teams to benefit from containerization. By incorporating security best practices into containerization from the onset, you and your development team will be empowered to deploy software applications more quickly and with less overhead.
