Installing and using Docker on Slackware

Last edited on Jun 6, 2015


Imagine you want an Asterisk system complete with MariaDB and Apache, but you don't want to install all that on your day-to-day system. You could create a VM, but VMs are heavy and carry quite some overhead. And what about backups? You can't copy a live running image (not with QEMU, at least).

Enter Docker. Docker makes this really simple. Docker encloses an environment like a chroot would, but it acts more like a VM. With Docker, the equivalent of a VM is a container. You create your filesystem, install the needed software in your container, and run it. But Docker lets you run only one command. You start the container, the command executes in its environment, and when the command finishes the container stops. All this uses the host's kernel, but separated with namespaces and cgroups. Nothing prevents you from running a script as the command. So you make a script that starts httpd and asterisk, then loads bash. The container will run as long as bash doesn't exit, so if you attach your session to the container and "exit" bash, the container will stop, and asterisk and httpd will shut down. The container does not start "init" (unless that's the command you chose to invoke), so it's not like an entire OS is brought up. Only the command you run executes in the container's filesystem, but with the host's kernel.

Docker allows you to export running containers to a tarball. The tarball contains the entire filesystem of the container (not the memory). You can then import it anywhere Docker runs.

Installing Docker on Slackware

These are the instructions for installing Docker 1.7.0 on Slackware 14.1 with kernel 3.11.1. I also tried with Slackware 14.0, but its rc.S script does not mount the cgroup hierarchy properly, so I modified it to do it like in 14.1. I could not get it to work with kernel 3.10.17 and did not bother to troubleshoot since I had 3.11.1 on hand. 1.7.0-rc1 did not work for me; I needed commit 6cdf8623d52e7e4c5b5265deb5f5b1d33f2e6e95 in, so I cloned the bleeding edge from git, but then two hours later rc2 came out.


Before anything, you should make sure that your kernel is compiled with the options Docker requires.


I actually made a script to enable all those settings:

CFG=( \
     "CONFIG_NF_NAT_IPV4" \
     "CONFIG_VETH" \
)

for i in ${CFG[*]}; do
    sed -i "/^# *$i/c\\$i=y" $1
done

Maybe some other flags need to be set in your kernel, but these were all the ones I was missing. There is a utility you can download to check those settings: https://github.com/docker/docker/blob/master/contrib/check-config.sh
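If you'd rather not download the script, the same kind of check is easy to hand-roll. This is a minimal sketch: the option list and config file path are just examples, and check-config.sh is far more thorough.

```shell
#!/bin/sh
# check_opts: verify that a kernel config file has each option built in (=y)
# or as a module (=m). Prints one line per option; returns nonzero if any
# option is missing.
check_opts() {
    cfg=$1; shift
    rc=0
    for opt in "$@"; do
        if grep -q "^$opt=[ym]" "$cfg"; then
            echo "$opt ok"
        else
            echo "$opt MISSING"
            rc=1
        fi
    done
    return $rc
}

# Example (path and option list are illustrative, not exhaustive):
# check_opts /usr/src/linux/.config CONFIG_VETH CONFIG_NF_NAT_IPV4 CONFIG_CGROUPS
```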

Download, compile, and prepare the environment

At first, I tried downloading the binaries, but docker complained about "Udev sync is not supported". I found out that was because the binary is statically linked, which causes some problems I didn't care to look into. So I opted for building from source. The first step is to get Go. I didn't want to leave it on my system, so I just put it in a temporary place and deleted it afterwards.

wget https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz
tar -zxf go1.4.2.linux-amd64.tar.gz
mv go /opt
export PATH=$PATH:/opt/go/bin

#You should make that permanent if you intend to keep Go after building Docker

# download docker source.
wget https://github.com/docker/docker/archive/v1.7.0-rc2.tar.gz
tar -zxf v1.7.0-rc2.tar.gz
cd docker-1.7.0-rc2

#docker won't build because a header file can't be found. There is a patch
#for that, but let's do it manually here:
sed -i "/ioctl\.h/c\#include \r\n#include \r\n#include \r\n#include \r\n#include \r\n" daemon/graphdriver/btrfs/btrfs.go

#note that DOCKER_GITCOMMIT needs to match the version you have downloaded
GOROOT=/opt/go AUTO_GOPATH=1 DOCKER_GITCOMMIT="395cced" ./hack/make.sh dynbinary
cp bundles/1.7.0-rc2/dynbinary/docker-1.7.0-rc2 /usr/sbin/docker
cp bundles/1.7.0-rc2/dynbinary/dockerinit-1.7.0-rc2 /usr/sbin/dockerinit

#remove go... or not. It's up to you. If you leave it there, you might want to permanently add it to your PATH
rm -Rf /opt/go

Prepare network bridge

In my case, since I was already using KVM/QEMU, I already had a bridge set up. But this is what would be needed:

#create bridge
brctl addbr br0
ifconfig eth0 down
ifconfig br0 <ip-address> netmask <netmask> broadcast <broadcast-address> up

# add eth0 as member of the bridge and bring it up.
brctl stp br0 off
brctl setfd br0 1
brctl sethello br0 1
brctl addif br0 eth0
ifconfig eth0 promisc up
# setup default gateway.
route add default gw <gateway-address>

You might want that last block to run at boot time. There is a way to set up a bridge with the init scripts, but I just added those lines to rc.local before launching the docker daemon.

There is currently no easy way to assign a static IP to your container. Docker will choose an IP in the range of your bridge, but this isn't perfect: it seems to pick some addresses that are already in use on my network. Issue 6743 on github is open for that. For the time being, I've hacked the code to make this possible. I won't create a pull request since they are already working on a more elegant solution, but meanwhile you can download my fork on github if you need it. That repo contains the patch to build on Slackware and also adds a "--ipv4-adress=A.B.C.D" option to docker run.

Auto start

Finally, you should add this in your rc.local script.

/usr/sbin/docker -d -b br0 &

Building a container, running it and doing backups

My use case

What I'm looking to do is to isolate my home automation services in one container that I can easily transport from one computer to another (or even to a VM). In case of a hardware failure, I want to reduce the downtime of my house services. Those include Asterisk, httpd, MariaDB, CouchDB, DHAS, cron jobs, and some more. I want to be able to make a daily backup of the container and always be able to launch it from somewhere else where Docker is installed.

Creating a container

docker run -ti vbatts/slackware:14.1 /bin/bash

That command will create a container from the base image "vbatts/slackware:14.1" from Docker Hub. The image will be downloaded automatically, then bash will be invoked. The -t and -i flags attach an interactive TTY so you will be able to interact with bash. From there, install whatever you need in the container: download gcc, install it, etc. Once you are done, exit bash. By exiting bash, the container stops. You can now commit your changes to a new base image:

docker commit ContainerID awesomeNewImage

Now you have a base image of your own that you can share with other people. Next, create the container for your real use case from that base image:

docker run -tid --restart=always awesomeNewImage /root/start.sh

The -d flag runs the container in the background; you can access it using "docker attach". --restart=always makes the container restart automatically when its command exits and when the docker daemon starts (after a host reboot, for example). When you were setting up your base image, you could have created a start.sh script that invokes asterisk, httpd, mysqld and couchdb, then bash. To detach from the container without stopping it (leaving your command running), press Ctrl-P, Ctrl-Q.
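A start.sh along those lines might look like the sketch below. The service commands are illustrative and depend on what you actually installed in the image; the key point is that the container lives exactly as long as the foreground bash does.

```shell
#!/bin/sh
# Hypothetical container entry point: start the background services,
# then hand the foreground over to bash.
/etc/rc.d/rc.httpd start
/etc/rc.d/rc.mysqld start
couchdb -b          # -b runs CouchDB 1.x in the background
asterisk            # asterisk daemonizes itself by default
exec /bin/bash      # exiting this shell stops the container
```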


An easy way to back up is to regularly export the container (via a cron job, say):

docker export -o backup.tar <container-id>

This will create a tarball containing the entire filesystem of your container. Then, to restore it, either locally or on some other machine running Docker:

cat backup.tar | docker import - restoredbackup
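To make that backup daily, the export can be wrapped in a small script driven by cron. The container name and destination directory below are assumptions; adjust them to your setup.

```shell
#!/bin/sh
# Hypothetical daily backup: export the container's filesystem,
# keeping one dated tarball per day.
NAME=homeautomation            # assumed container name
DEST=/backup/docker            # assumed backup directory
mkdir -p "$DEST"
docker export -o "$DEST/$NAME-$(date +%F).tar" "$NAME"
```

A crontab entry such as "0 3 * * * /usr/local/sbin/backup-container.sh" would then run it nightly at 03:00.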