04 April, 2021

Adventures in Systemd-Linux-Dockerland

I have just recently started using Docker again after a roughly six-year hiatus (I used it regularly ~2014-2015). I do not recommend following the actions I document here in any way, shape, or form, but I wanted to chronicle them in case they're educational for others, or in case I don't use Docker again until 2030 and want to know what worked in 2021.

Constructive suggestions or corrections (so I know for the future) are welcome! There are likely better or more efficient ways to do all of the following.

Intro

The proper, real, Docker setup documentation for Arch may be found here. If you actually want to use Docker on Arch, I would recommend trying that. You might also want to get a peep at the systemctl and systemd manpages.

Disclaimer: some of the following may be universal across Linux distributions, but as I run Arch, some of it is likely specific to Arch and perhaps also my own setup.

How I got Docker running from zero on Arch, as described here, differs quite a bit from how people ought to do it. I was going too fast and assumed I still knew stuff. I also barely tolerate systemd, though I've grown more accustomed to it in recent years.

I hope this illustrates some potential pitfalls of skipping the official documentation for the thing you want to run (or the documentation for running said thing on your own distro) when you actually want to get something done fast.

What I did, with explanations:

Took a wild guess here and:

$ sudo pacman -S docker

According to this, you need the loop module, and then you'll install Docker from the Snap store. Needing loop tracked with my previous experience with Docker, so I didn't question it. I don't use Snap (which the link recommends), given I already have pacman on Arch, so I skipped that part of the tutorial and went on to the next bits.

We need Docker running in order to create and interact with containers. One can run Docker a couple of different ways: with systemd (systemctl), or just by calling dockerd directly. I went the systemd route this time.
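For the record, the dockerd-direct route (which I did not take) would have looked roughly like this sketch, with the daemon left in the foreground so you can watch its logs; the systemd-flavoured steps I actually ran are next.

$ sudo dockerd            # run the daemon directly in the foreground; Ctrl-C stops it
$ sudo dockerd --debug    # same thing with more verbose logging, if you're poking at problems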

$ sudo tee /etc/modules-load.d/loop.conf <<< "loop"
$ sudo modprobe loop  # load the loop module we just configured
$ sudo systemctl start docker.service

Starting the docker.service unit with sudo is how the tutorial recommended starting Docker, but after trying it I felt this didn't make sense for my objective. With the sudo-prepended call to systemctl, we are (as far as I understand) acting on the root user's environment rather than the current user's. Note that `start` will not, to my current knowledge, cause docker.service to run automatically on future boots; running `sudo systemctl enable docker.service` instead would do that.
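As a quick sketch of the start-versus-enable distinction (these are illustrative commands, not steps from the tutorial):

$ systemctl is-enabled docker           # "disabled": nothing will start it on future boots
$ sudo systemctl enable docker          # adds the multi-user.target.wants symlink for future boots
$ sudo systemctl enable --now docker    # or enable it and start it right now in one go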

According to the systemctl manpage, `systemctl start docker` and `systemctl start docker.service` are equivalent: if no suffix is specified, systemctl adds the appropriate one for us (.service by default). But sudo adds a dimension of weirdness here; it seems to override systemctl's usual requirement to ask for your password. `systemctl start docker` without sudo would have done what I actually wanted: simply use systemctl to manage dockerd in a way that is localised to the user session I'm booted into on my machine. When sudo executes a command in this form, we use it to act as the superuser (root), which I believe implies I'd also have to use sudo to initialise, run, etc. any future Docker containers, and that wasn't the outcome I wanted.

Back to Docker. At this point, Docker Was Not Up.

`systemctl status docker.service` showed the service had failed to start. I got cranky, so I went looking for docker.service. The following is the unit file I got by default when I installed the docker package on Arch. As shown by the Requires= line in the [Unit] section, docker.socket is a dependency of docker.service:

$ cat /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target docker.socket firewalld.service
    Wants=network-online.target
    Requires=docker.socket

    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd://
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=1048576
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNPROC=infinity
    LimitCORE=infinity
    # Uncomment TasksMax if your systemd version supports it.
    # Only systemd 226 and above support this version.
    #TasksMax=infinity
    TimeoutStartSec=0
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    # restart the docker process if it exits prematurely
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s

    [Install]
    WantedBy=multi-user.target

So I tried:

$ systemctl start docker.socket

This worked in that docker.socket came up successfully, but the change in system state did not help with the command from the tutorial I was still trying to follow at this point, `sudo systemctl start docker.service`. Starting Docker either with `systemctl start docker` or with dockerd should, I believe, start the socket automatically for you, though you may notice that if you `systemctl stop docker` while docker.socket is still active, the socket unit can start docker right back up again.

Since docker is packaged as a socket-activated daemon by default on Arch, I could have enabled just the docker.socket unit and used that as the basis for my containers in the future. In this mode, systemd listens on the socket in question and starts docker only when I start containers. This style of usage is meant to be less resource-intensive, since the docker daemon would then run only when needed. We could also go full socketception and make socket-activated, on-demand container units for containers we want to reuse, and then use systemctl to control them.
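Roughly what that socket-only setup might have looked like (a sketch, assuming my user can already reach the Docker socket, e.g. via the docker group I join later on):

$ sudo systemctl enable --now docker.socket   # systemd now listens on /var/run/docker.sock
$ systemctl is-active docker                  # "inactive": the daemon isn't running yet
$ docker info > /dev/null                     # the first client connection socket-activates docker.service
$ systemctl is-active docker                  # "active"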

But still, all I really wanted to do was `systemctl start docker` and then just use docker by itself (no sudo, no extra systemctl units), so I tried to reset my environment with:

$ systemctl disable --now docker
$ systemctl disable --now docker.socket


So that we can have networking in our containers, it appears Docker automatically creates a docker0 bridge for us when we start it as our own user account using systemctl; the bridge sits in the DOWN state until a container is attached to it:

$ sudo ip link
[sudo] password for user:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback ...
2: enp0s25: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ...
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
    link/ether ...

$ systemctl status docker
○ docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: inactive (dead)
TriggeredBy: ○ docker.socket
       Docs: https://docs.docker.com

$ systemctl start docker
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to start 'docker.service'.
Authenticating as: user
Password:
==== AUTHENTICATION COMPLETE ====

$ systemctl status docker            
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: active (running) since ...; 1h 36min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2881 (dockerd)
      Tasks: 20 (limit: 19044)
     Memory: 155.5M
        CPU: 3.348s
     CGroup: /system.slice/docker.service
             ├─2881 /usr/bin/dockerd -H fd://
             └─2890 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

$ sudo ip link           
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback ...
2: enp0s25: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ...
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
    link/ether ...
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether ...

At this point, I had Docker itself working as my own user account. I thought I was ready to pull an image down and try to make a container. 

$ docker pull sickcodes/docker-osx:auto

I was wrong! 

Trying to build a headless container from the docker-osx:auto image as specified in the doc I was following did not fully work:

$ docker run -it \
    --device /dev/kvm \
    -p 50922:10022 \
    sickcodes/docker-osx:auto

I kept getting LibGTK errors, which it turned out were not due to anything being wrong with the container, but rather to an assortment of packages I still needed to install and a few missing group memberships for my user. I got stuck here for a while trying to figure out all the different things I didn't have yet, using a combination of the Arch documentation, the docker-osx documentation, and the rest of the internet. It's plausible you might encounter a similar error running docker-osx if you don't have xhost available; this stumped me for quite a while, since at first I figured the issue was just xhost, as described in the docker-osx troubleshooting docs.

Mini disclaimer: I typically use yay, not pacman, but wanted to show pacman in the earlier example since it's more commonly known. I don't recall which of the following packages are in the AUR versus the standard repositories, but here is what I think I ended up needing:

$ yay -S qemu libvirt dnsmasq virt-manager bridge-utils flex bison iptables-nft edk2-ovmf

I then set up the additional daemons I needed (actually enabling them this time so they'd be available on future boots) and added my user to the libvirt, kvm, and docker groups.

$ sudo systemctl enable --now libvirtd
$ sudo systemctl enable --now virtlogd
$ echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs
$ sudo modprobe kvm
$ lsmod | grep kvm
$ sudo usermod -aG libvirt user
$ sudo usermod -aG kvm user
$ sudo usermod -aG docker user
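Worth noting: group changes like these only apply to new login sessions, so something like the following (a sketch; `user` is the same placeholder as above) is how I'd check they took effect after logging out and back in:

$ groups user       # the group database should now list libvirt, kvm, and docker
$ id -nG            # groups of the current session; docker only shows up after a fresh login
$ docker info       # should now work without sudo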


Just in case the LibGTK error really *was* an xhost problem after all that, I figured I'd follow the troubleshooting documentation I was using as well.

$ yay -S xorg-xhost
$ xhost +


Finally, I was able to create a container and boot into it for the first time using:

$ docker run -it \
    --device /dev/kvm \
    -p 50922:10022 \
    sickcodes/docker-osx:auto

Do note this run command makes you a fresh, brand new container every time you run it. You'll be able to see all of yours with `docker ps --all`.

So that I can install things in my container and reuse it, I'll (after this) use `docker start -ai <CONTAINER ID>` or `docker start -ai <NAME>` instead of another round of `docker run -it`, but you may wish to stick to the run command if you want a new container each time you start up.
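For example, one way to make reuse a little easier (a sketch; the container name here is just an arbitrary example) is to name the container up front and restart it by that name later:

$ docker run -it \
    --name my-osx-box \
    --device /dev/kvm \
    -p 50922:10022 \
    sickcodes/docker-osx:auto   # first run creates the named container
$ docker ps --all               # lists it whether running or stopped
$ docker start -ai my-osx-box   # reattaches to that same container later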

I also ran into a small snag when I updated my system and suddenly couldn't start containers anymore with either `docker run -it` or `docker start` following a kernel upgrade ("docker: Error response from daemon: failed to create endpoint dazzling_ptolemy on network bridge: failed to add the host (veth1e8eb9b) <=> sandbox (veth73c911f) pair interfaces: operation not supported"), which was exciting, but fixable with just a reboot.

The cause of this issue is a mismatch between the running kernel - the one still running from before the upgrade - and the installed kernel modules, which now match the new, upgraded kernel instead. Docker can no longer load the module it needs (likely veth, going by the error above) for the old kernel. On reboot, we boot into the freshly updated kernel, which matches the installed modules, so Docker can load kernel modules that match the running kernel once again.
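A quick sketch of how one might spot this state before rebooting (assuming the stock linux kernel package; adjust the package name if you run a different kernel):

$ uname -r               # the kernel version still running from before the upgrade
$ pacman -Q linux        # the kernel version you just upgraded to
$ ls /usr/lib/modules/   # after the upgrade, only the new version's modules are installed
$ sudo modprobe veth     # typically fails for the old running kernel, hence the Docker error above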

Thus: either Docker is exactly as terrible as I recall, or, worse, I had forgotten just about everything useful I used to know about it. Either way, I think from here on out I'll be mostly okay.
