Sunday 25 September 2022

Why I'm (mostly) not using Docker

I'm somewhat wary of Docker. Rather than reposting the same points on Reddit, I thought it would be quicker to list the reasons here and then just post the URL when it comes up.

I'm running a few hundred LXCs at $WORK. It's a really cheap way to provide a computing environment. And it works. But I'm more cautious about Docker. Docker is not supported as a native container provider on Proxmox - which is where most of my VMs and LXCs now live - but that has very little bearing on my concerns. I do have VMs running Docker - more on that later.

The first problem is that it's designed for running appliances. Some software fits very well into this model - but such software is usually the edge case. For databases I do not want lots of layers of abstraction between the runtime and the storage. For routers and firewalls I want the interfaces to be under the direct control of the host. For application and web servers I want to be able to interrogate memory and CPU usage on a per-process basis. Working on Docker containers feels like keyhole surgery: it might be very high-tech, but it's awkward and limiting. Conversely, I can have a (nearly) fully functional LXC host with very little overhead.
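To make the per-process point concrete, here is a minimal sketch of the sort of direct inspection I mean - plain Python reading standard procfs, nothing Docker- or LXC-specific assumed. On an LXC host the container's processes sit in the host's process table, so a script like this sees them with no extra machinery.

    #!/usr/bin/env python3
    # Minimal sketch: per-process resident memory, read straight from /proc
    # on the host. Only standard procfs files are used.

    import os

    def rss_kib(pid):
        """Resident set size in KiB for a PID, or None for kernel threads
        and processes that have already exited."""
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])
        except OSError:
            pass
        return None

    def cmdline(pid):
        """Command line for a PID, best effort."""
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                return f.read().replace(b"\0", b" ").decode(errors="replace").strip()
        except OSError:
            return "?"

    if __name__ == "__main__":
        procs = []
        for pid in os.listdir("/proc"):
            if pid.isdigit():
                rss = rss_kib(pid)
                if rss is not None:
                    procs.append((rss, int(pid), cmdline(pid) or "?"))
        # Print the ten largest resident processes.
        for rss, pid, cmd in sorted(procs, reverse=True)[:10]:
            print(f"{rss:>10} KiB  pid {pid:<7} {cmd[:80]}")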

For a lot of people out there, the idea that you can just click a couple of links and have a service available for use sounds great. And it is. I've downloaded stuff from Docker Hub to try out myself. But I wouldn't run it in production. The stuff I do run in production has a well-defined provenance - it has either come from the official Debian/Ubuntu repos or from the people who wrote the software. In the latter case, there are processes in place to check whether the software needs to be updated. A Docker container, by contrast, is built up of multiple layers, sourced from different teams and developers, most of whom are repackaging software written by someone else. Beyond the issue of sourcing software securely, the layers of packagers may also add capabilities to the container. It really might not be as isolated from the host as you think.
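When I do look at an image, this is roughly the check I'd want to script before trusting it: where each layer came from, and whether a running container has been handed extra privileges. A sketch only - it leans on the stock docker CLI, and the image and container names are placeholders.

    #!/usr/bin/env python3
    # Rough provenance/privilege check for an image and a running container,
    # using only the standard docker CLI.

    import json
    import subprocess

    def docker(*args):
        """Run a docker CLI command and return its stdout as text."""
        return subprocess.run(
            ["docker", *args], check=True, capture_output=True, text=True
        ).stdout

    def layer_history(image):
        # One line per layer: the command that produced it. This is where
        # you see how many different packagers' hands the image passed through.
        return docker(
            "history", "--no-trunc", "--format", "{{.CreatedBy}}", image
        ).splitlines()

    def container_privileges(container):
        # Extra capabilities and privileged mode granted to a running container.
        host_config = json.loads(
            docker("inspect", "--format", "{{json .HostConfig}}", container)
        )
        return {
            "Privileged": host_config.get("Privileged"),
            "CapAdd": host_config.get("CapAdd"),
            "SecurityOpt": host_config.get("SecurityOpt"),
        }

    if __name__ == "__main__":
        for step in layer_history("nginx:latest"):      # placeholder image
            print(step)
        print(container_privileges("my-app"))           # placeholder container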

This lack of accountability is a growing concern - indeed, Chainguard have released a Linux distribution specifically to address the problem. Will it succeed? It's too early to tell.

So really the only sensible way to use Docker in an enterprise environment is to build the images yourself. That demands additional work and a high level of skill in yet another technology, just to get the same result.

BTW - the Docker images I've used to trial software and then decided to take into production have been implemented as conventional installs on LXCs or VMs.