Unless you’ve been hiding under a rock, you’ll be well aware of the recent critical vulnerability discovered in the GNU C Library, a core component of the vast majority of Linux distributions. The vulnerable function was used in many thousands of Linux applications across potentially millions of devices, including servers.
A patch was released in short order, and up-to-date Linux servers are safe; for many, nothing more than a simple YUM or APT command was needed to remove the problem.
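On a traditional server, the fix really is that quick. A minimal sketch, assuming a GNU/Linux host (exact package names vary by distribution):

```shell
# Check which glibc the host is running (on non-GNU systems ldd may be absent).
if command -v ldd >/dev/null 2>&1; then
  ldd --version | head -n1
else
  echo "ldd not found (system may not use the GNU C Library)"
fi

# Applying the fix is a one-line package update, followed by restarting
# affected services (or rebooting, since almost everything links against libc):
#   RHEL/CentOS/Fedora:  sudo yum update glibc
#   Debian/Ubuntu:       sudo apt-get update && sudo apt-get install --only-upgrade libc6
```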
The same cannot be so easily said of the container technology that has become increasingly popular over the last couple of years. Many containers are likely to still be at risk because of the messy complexity of the container ecosystem, the difficulty of updating containers in place, and because many users of containers aren’t entirely clear what software they are running.
No one doubts that containers are a substantially beneficial development for enterprise application hosting. They’re a great option for companies that use cloud servers or dedicated servers as the underlying infrastructure layer of their application hosting environment. Each container is a self-contained environment that holds everything needed to run a specific application or service. They’re eminently portable and can easily be deployed on systems ranging from a developer’s laptop to a production server. And they’re predictable: because containers carry a full (although minimal) operating system environment with them, code that runs in testing and development will run just as well in production.
The fly in the ointment is that although in theory containers are easy to update — one can simply destroy an old container and build a new one with the patched software — in practice it’s not so easy, as has been pointed out by Red Hat’s Gunnar Hellekson and Josh Bressers:
“As patches are being delivered by Linux vendors and community distributions, there’s one glaring issue at play: Who’s fixing containers?”
The flexibility of containers is a double-edged sword. Many enterprise container users build images from publicly maintained image repositories. The speed at which the images in those repositories are updated varies considerably, and for a company with a significant number of containers deployed, it can be hard to keep track of which containers are running which software versions. Updating containers isn’t as simple as running a quick update command; many, by design, don’t even expose an SSH port that system administrators can log in to. The containers have to be rebuilt, and if the images and scripts they are based on aren’t updated, there’s not much a user can do about it.
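Keeping track of versions across a fleet is where this gets awkward. As a hedged illustration (assuming the Docker CLI is available on the host; the `audit_glibc` helper is hypothetical, not a standard tool), an administrator can at least query the libc inside each running container:

```shell
# Hypothetical audit helper: report the glibc version inside each running
# container. Assumes the Docker CLI; all names here are illustrative.
audit_glibc() {
  echo "glibc audit:"
  if ! command -v docker >/dev/null 2>&1; then
    echo "  docker not available on this host"
    return 0
  fi
  for id in $(docker ps -q); do
    name=$(docker inspect --format '{{.Name}}' "$id")
    ver=$(docker exec "$id" sh -c 'ldd --version 2>/dev/null | head -n1')
    echo "  ${name#/}: ${ver:-no ldd inside this container}"
  done
}
audit_glibc
```

Note that this only reports versions: fixing an affected container still means rebuilding its image, not patching it in place.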
The effect this has in the real world is clearly exemplified in this question from Stack Exchange user mc0e:
“Some third party containers are poorly maintained, and likely not to get rebuilt any time soon. We have local containers that need work to get re-builds done, often not more complicated than rebasing them on a newer upstream base container. The speed of those upstream containers getting rebuilt will be quite variable. If I rebuild everything locally now, I’ll be too early to get the fix in some cases. In general the ‘everything’s just a container’ promise of docker administration doesn’t hold here.”
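“Rebasing on a newer upstream base container”, as the poster puts it, generally just means rebuilding the image so that its FROM line resolves to a freshly patched upstream image. A hypothetical sketch (image and package names are illustrative, not from the article):

```dockerfile
# Hypothetical Dockerfile. The FROM line is the upstream base being "rebased" on.
# Rebuilding with `docker build --pull -t my-app .` forces Docker to fetch the
# latest upstream image, picking up the patched glibc, but only if the upstream
# maintainer has actually rebuilt that base image.
FROM debian:latest
RUN apt-get update && apt-get install -y --no-install-recommends my-app \
    && rm -rf /var/lib/apt/lists/*
CMD ["my-app"]
```

This is exactly where the poster’s complaint bites: if the upstream base hasn’t been rebuilt, `--pull` faithfully fetches the same vulnerable layers.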
The lesson here is not that there is anything wrong with running container-based infrastructure, but that the ecosystem surrounding the technology needs to improve, so that users are afforded some insight into the state of their containers and some guarantee that the software inside them can be updated in a timely fashion.