How I set up the docker image with systemd and sshd
For testing Ansible roles, I need a docker container that runs systemd or init - a container in which services can be enabled. I also need this container to be accessible via ssh. This way, I don't have to spin up cloud resources (which are not free) just to test something against a remote host.
There are a few downsides to using a docker container for this, even for testing. Notably, docker-inside-docker is tricky: if you need to install docker inside a docker container, you may run into difficulties, especially on macOS, which does not have cgroups. On macOS it is not possible to mount cgroups into a docker container:
## not possible:
docker run -d --privileged \
--name systemd-test \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /tmp/systemd-journal:/var/log/journal \
-v /etc/machine-id:/etc/machine-id:ro \
my-systemd-image
Because /sys/fs/cgroup does not exist on the host. (And if I'm wrong in this assessment, please note it in the comments!)
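A quick way to see which situation you are in is to check for the cgroup hierarchy on the host. This is just a sketch: on a Linux host the directory exists, on macOS it does not.

```shell
## Check whether the host has a cgroup hierarchy to bind-mount
if [ -d /sys/fs/cgroup ]; then
  echo "cgroups available"
else
  echo "no cgroups"
fi
```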
Now, it is possible to work around this limitation. And for the purpose of testing Ansible with Molecule, a Debian image that allows sshd is already maintained: geerlingguy/docker-debian11-ansible:latest
However, I preferred to roll my own, based on Ubuntu. Here is my Dockerfile:
## Dockerfile-sshd
FROM ubuntu:22.04
RUN apt update && \
    apt install -y \
    apt-utils \
    build-essential \
    curl \
    libcurl4-openssl-dev \
    nginx nodejs \
    python3 \
    screen shared-mime-info ssh \
    tree \
    vim
RUN mkdir -p /run/sshd
EXPOSE 22 80
CMD ["/usr/sbin/sshd", "-D"]
I usually install more packages than is warranted - feel free to remove the ones you consider unnecessary. I personally prefer to have all the conveniences installed everywhere.
Then you'd build the image, substituting your own tag for mine:
docker build . -f Dockerfile-sshd -t piousbox/ubuntu-sshd:0.0.14
The way I run the container: I keep a local directory volumes/root/ that I mount as /root in the container. It holds the ssh keys in ~/.ssh/authorized_keys for logging in as root, so that the keys are not baked into the image. I also provide ~/.screenrc and ~/.bash_aliases (and a matching ~/.bashrc ) for convenience. The structure of volumes/root/ is:
volumes/root/
  .bashrc
  .bash_aliases
  .ssh/
    authorized_keys
  .screenrc
The contents of authorized_keys are the public ssh keys, one per line:
cat ~/.ssh/my-key.pub >> volumes/root/.ssh/authorized_keys
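One thing worth noting: sshd is strict about the permissions on key material, so when first creating the bind-mounted directory I also tighten the modes. A sketch of the one-time setup (append your own public keys as shown above):

```shell
## One-time setup of the bind-mounted /root home
mkdir -p volumes/root/.ssh
touch volumes/root/.ssh/authorized_keys
chmod 700 volumes/root/.ssh
chmod 600 volumes/root/.ssh/authorized_keys
```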
As a quality-of-life improvement, I add the *filename* of the key as the *email tag* in the public key, and I never rename keys. This way I usually know which key I'm looking at, even when I'm looking at just the public key line:
cat ~/.ssh/my-key.pub
ssh-rsa AAAAB3NzaC1yc...3Ccn95xzZW4TQiRxVoNkz= victor+my-key@wasya.co
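To get that naming for free when creating a key, you can pass the filename-based tag as the key comment at generation time. A sketch - the key name and email here are placeholders from my example above:

```shell
## Generate a key whose comment matches its filename
rm -f /tmp/my-key /tmp/my-key.pub
ssh-keygen -t ed25519 -N "" -C "victor+my-key@wasya.co" -f /tmp/my-key
cat /tmp/my-key.pub
```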
Finally, I start the container from docker-compose:
## docker-compose.yml
version: '3.2'
services:
  ant_1:
    image: piousbox/ubuntu-sshd:0.0.14
    init: true
    ports:
      - 9001:80 ## http
      - 9101:22 ## ssh
    restart: 'unless-stopped'
    volumes:
      - type: bind
        source: volumes/root
        target: /root
      - type: bind
        source: volumes/letsencrypt/live
        target: /etc/letsencrypt/live
      - type: bind
        source: volumes/sites-available
        target: /etc/nginx/sites-enabled
    ...
Omitting some configuration for brevity. I call local test containers "ants".
docker-compose up -d ant_1
With this setup, I am able to ssh into the container, either manually or with automation such as Ansible Molecule:
ssh -i ~/.ssh/my-key root@127.0.0.1 -p 9101
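The same connection parameters can go straight into an Ansible inventory, so roles can target the container like any other host. A minimal sketch - the group name ants is an assumption, matching what I call these containers:

```ini
## inventory.ini
[ants]
ant_1 ansible_host=127.0.0.1 ansible_port=9101 ansible_user=root ansible_ssh_private_key_file=~/.ssh/my-key
```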