
Securing Containers

By: Josiah Huckins - 5/13/2022


I use containers kind of...a lot.

The container operating model has become the de facto standard for most headless applications I manage, replacing virtual machines.

Maintaining a lean, bare minimum app build with all dependencies has never been easier. Deploying that build at scale, across numerous disparate environments is also simple.
With such scalability, the importance of security cannot be overstated. The last thing you want is to roll out a build with exploitable vulnerabilities, or expose PII in the clear. A hardened container based on a trusted image is the only way to fly.

Image Sourcing

Image Scanning

Container Privileges and Resource Allowance

A Few Important Notes on Namespacing


Image Sourcing

Getting right into it, the first move toward a secure containerized solution is to obtain the base image from a trusted registry. Chances are, you're not building a container's runtime environment from scratch. All of my images are based on some flavor of Linux image. Whether you use an internal registry server or a service like Docker Hub, you need a means to ensure the integrity of your base image.

Docker Hub provides a means to ensure the image you pull consistently comes from the same reputable source. Enter Docker Content Trust (DCT).
DCT helps guarantee integrity by giving image publishers a means to sign their image tags. Each Hub repository is provisioned a set of keys used to sign its image tags. Signing images requires the registry containing them to have a "notary" associated with it, Docker Hub being one such notary. The private key is generated or loaded into the image creator's local trust repository, and the notary server stores the corresponding public key. Image creators sign with their private key, and image consumers can verify the integrity of the signed content via the docker trust inspect command. Check out the official documentation for details.
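As a quick sketch (the repository name here is hypothetical), signing and verifying from the CLI looks something like this:

# Enforce content trust for pulls, pushes and runs in this shell
export DOCKER_CONTENT_TRUST=1

# Sign a tag with your private key (you'll be prompted for key passphrases)
docker trust sign yourname/yourimage:1.0

# Verify the signatures attached to a tag
docker trust inspect --pretty yourname/yourimage:1.0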

One thing to be careful with here is that this process establishes a root of trust for a publisher's activities, but it doesn't verify the publisher. Since the Docker CLI is open source, anyone can generate a private/public key pair and have it added to the relevant client and notary trust stores to sign their own images. Always strive to obtain base images from Docker verified publishers. Using DCT with verified publishers provides assurance that future updates to signed images have truly come from your verified source.


Image Scanning

You can and should scan your images with every deployment or upgrade (even an upgrade of the base image). Scanning helps detect "Common Vulnerabilities and Exposures" (CVEs) in your image, Dockerfile and dependency libraries. There are two types of scanning: local scanning and hub scanning.

Local scanning uses the Docker scan plugin. Before you begin, it's important to update Docker Desktop (which includes this plugin) or install the latest version via your package manager. This ensures the plugin always scans against the latest vulnerability database.
You can scan images from the CLI like so: docker scan some-image-name
More details can be found in the docs. Note the option to check the dependency tree via the --dependency-tree flag. This lists all dependencies for the image, a useful feature when trying to pinpoint where a vulnerable library might be coming from.
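For example (the image name is hypothetical), a basic scan, a dependency-tree scan and a scan that includes the Dockerfile look like:

# Scan an image for known CVEs
docker scan myapp:latest

# List every dependency alongside the findings
docker scan --dependency-tree myapp:latest

# Include the Dockerfile for more targeted remediation advice
docker scan --file Dockerfile myapp:latest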

Hub scanning takes place within Docker Hub. This service is provided with Docker's Pro, Team, or Business plans. When active, pushes to your image repository trigger a scan. Hub scanning uses the same underlying engine as local scanning (Snyk) and attaches vulnerability reports to the image repository.


Container Privileges and Resource Allowance


With the source of your images and future state changes to them considered, let's move on to securing running containers. First, it's encouraged to run containers as a non-root user. Running as non-root limits potential access to system APIs in the host kernel.
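As a minimal sketch (the Alpine base and the "app" user are just examples), you can bake a non-root user into the image, or force one at runtime:

# Dockerfile: create and switch to an unprivileged user
FROM alpine:3.16
RUN addgroup -S app && adduser -S app -G app
USER app

# Or override the user at runtime with an arbitrary uid:gid
docker run --user 1000:1000 myapp:latest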

Containers should always have resource limits defined. Docker provides runtime options to limit memory consumption via the -m or --memory= option; you can also set a mem_limit in a Docker Compose file. This is particularly important for garbage-collected Java or C# apps. Of less importance in my opinion, but still worth examining, is the enabling and tuning of swap. In most cases it's best to avoid swap entirely by providing adequate memory for the container to do its work. Keeping data in memory is also faster, and memory paged out to disk may need added security controls if your app works with sensitive data. Per the docs, "If --memory and --memory-swap are set to the same value, this prevents containers from using any swap".
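For instance (values and image name hypothetical), capping memory at runtime or in a compose file:

# Cap the container at 512 MB; matching --memory-swap to --memory
# prevents the container from using any swap
docker run -m 512m --memory-swap 512m myapp:latest

# docker-compose.yml (version 2 file format)
services:
  myapp:
    image: myapp:latest
    mem_limit: 512m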

There are also advanced options to tune how many CPU cores a container may consume, or to define the type of CPU scheduler the container should use. Take special caution when tuning the scheduler: too much processor time for one container can monopolize available processing cycles when you run multiple containers. A DoS scenario is possible with a container that's over-allocated for real-time work. For most container applications, the default CFS scheduler is sufficient. CPU core usage limits are worth setting in any shared setup with multiple containers. Just as before, be sure to check out the details in the docs.
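A few example runtime flags (values hypothetical):

# Allow at most 1.5 CPUs worth of cycles
docker run --cpus=1.5 myapp:latest

# Pin the container to specific cores
docker run --cpuset-cpus=0,1 myapp:latest

# Weight CPU time relative to other containers (default weight is 1024)
docker run --cpu-shares=512 myapp:latest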

When trying to secure container access, understand that while a container behaves like a virtual machine, it's not the same. VMs virtualize hardware and run their own operating system, with its own kernel and user space programs. Containers include similar features of an OS, but use the underlying host kernel and its resources. Containers only see the binaries and tools purposely added to their namespace.
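You can see the shared kernel directly; a container reports the host's kernel release, not its own (alpine is used here just as a convenient small image):

# Both commands print the same kernel release
uname -r
docker run --rm alpine uname -r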


A Few Important Notes on Namespacing

On the subject of namespaces, there are various purpose-specific namespace types in a modern Linux kernel. These are used to sandbox many aspects of a container, including user/group IDs, IPC, network stacks, mount points and processes. One of the most important namespaces for security is the user namespace. I mentioned earlier that the process running containers should be a non-root user, but what about the user within the container itself?

This user is root, on purpose, to allow for custom configuration or installation of packages in the container. The problem? It's possible to map host paths to container paths when running a container, perhaps for the seemingly innocent purpose of sharing some resources from the host. Without user namespaces, this means the root user in the container can read and modify files in the mapped path, and the changes are applied to the host path! Container escaping, anyone?
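To see the risk for yourself (a demonstration only, best run on a disposable host; paths are examples), a container's root user writing through a bind mount lands directly on the host filesystem:

# Mount a host path and write to it as the container's root user
docker run --rm -v /tmp/shared:/data alpine sh -c 'echo owned > /data/escape.txt'

# Back on the host, the file exists and is owned by root
ls -l /tmp/shared/escape.txt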

The solution to this problem is to employ user namespaces. Supported since Docker 1.10, these allow the host to map its uids and gids to other uids and gids specific to a container. This prevents the container root user from modifying the underlying host files. Here's how to set it up:

Enable user namespace remapping by adding the following to your /etc/docker/daemon.json: { "userns-remap": "default" }
(The default user for this namespace remapping is "dockremap", but it's possible to specify a custom one.)

Note, if not using a daemon.json, add the --userns-remap=default flag to the daemon invocation in your /etc/init.d/docker file: /usr/local/bin/dockerd --userns-remap=default
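Once remapping is active, you can verify it (the exact numbers shown here are examples; they depend on the subordinate ID ranges in /etc/subuid and /etc/subgid):

# Restart the daemon, then check the uid mapping inside a container
docker run --rm alpine cat /proc/self/uid_map
# e.g. "0  100000  65536" - container uid 0 maps to unprivileged host uid 100000

# On the host, container data now lives under a remapped directory
ls /var/lib/docker/100000.100000/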


Closing Thoughts

The lightweight, portable, and pruned design of containers makes them an ideal choice for cloud-hosted applications. As we've seen, they require a number of configurations to provide the best security posture. As support in the Linux kernel and Docker Engine grows, it's important to keep up with vulnerabilities and their mitigations.

One aspect of security that we did not cover is monitoring. It goes without saying that you should monitor your container resources, response times, access, and modifications daily. Keep an eye on any API calls to the kernel and external systems. I may cover this in detail in another post. For now, thanks for reading!

