Atomic Host – Basic setup and usage

It is surprising how little-known the Atomic Host operating system is even though it is very intriguing and forward-looking. Let me explain some of its underlying concepts, advantages and challenges and show you how to get started.

A primer

First, I would like to quickly point out some of the main advantages and challenges of Atomic Host in comparison to a conventional operating system. If this is already clear to you or you are only interested in how to set up and use Atomic Host, feel free to skip this part.

Advantages of using Atomic Host

Basically, the operating system itself is an immutable snapshot. There is no package manager like yum or apt for installing packages. Instead, everything necessary for running containers is pre-installed, allowing us to add any utility or software we might need in the form of a container or pod. This might not yet seem like an advantage, but hang on.

Atomic Host updates consist of downloading an operating system snapshot and booting into it, as if it were a new kernel version. With a conventional operating system, identifying what went wrong after a package update is difficult. With Atomic Host, simply boot into the former snapshot version and you have the exact same state as you had before the update.
The same principle applies to the applications running on top of Atomic Host, because container images are always version controlled. The new application version doesn’t work? Roll back to the previous version and fix the image that failed. One could even automate a rollback of the OS snapshot or a container image if initial tests after a reboot fail.
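For reference, rolling back the operating system described above is a single command followed by a reboot; `atomic host rollback` switches the host back to the previously booted tree:

```
$ sudo atomic host rollback
$ sudo systemctl reboot
```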

Another advantage, from my point of view, is that Atomic Host forces us to take the concept of containers and (at least partially) microservices more seriously. Docker might be quite the hype among developers, but understanding the underlying concepts is another matter. I’ve seen plenty of bad examples where containers were tightly tied to the underlying operating system. The immutability of Atomic Host forces the creation and use of containers that are independent of the host they run on.

Challenges

Working with Atomic Host takes some getting used to. Only `/etc` and `/var` are writable, so data that should persist across updates or reboots has to be put into `/var`. Note that a lot of directories in `/` are just symlinks pointing into `/var`, which is where our persistent data belongs.
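Because of those symlinks, it can be worth checking where a path really ends up before storing persistent data under it. A small sketch (the `persists` helper is my own, not part of Atomic Host):

```shell
# Hypothetical helper: does a path resolve into the writable /var tree?
persists() {
  case "$(readlink -f "$1")" in
    /var|/var/*) echo yes ;;
    *) echo no ;;
  esac
}

persists /var/lib/containers   # data here survives upgrades
persists /usr/local            # a symlink into /var on Atomic Host
```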

Another challenge is container management. How do we automatically start containers after a reboot? There are several possibilities; systemd and Kubernetes are two of them. In this article, I’m going to use systemd because it is easier to use. Fedora Atomic Host 26, which is currently in beta, will bring containerized Kubernetes, which will simplify its use.

Not only do we need to make sure to put data into the right place on the filesystem so that it persists across reboots and updates, but we also need to do the same for data inside containers. Additionally, SELinux is in enforcing mode by default, which is a good thing (really). So there are some caveats that might pose a challenge at first.

There’s a good knowledge base article from Red Hat which explains the differences between a RHEL server and a RHEL Atomic Host in more detail.

Installation

I am assuming that the installation of an operating system image does not pose a challenge for you, so I will not cover this step in depth. Project Atomic has a good Get Started with Atomic page which lists different OS images and explains how to install them. I have used the Fedora Atomic Host 26 ISO. The nice thing about Atomic Host is that the choice of distribution does not matter that much because everything is implemented as containers. So for this tutorial the prerequisites can be wrapped up as:

  • docker
  • systemd
  • the `atomic` cli tool
  • that’s it

Initial setup

Update

Before doing anything, we want to make sure we’ve got the latest Atomic Host version installed and reboot if necessary (the `atomic host upgrade` command will tell us if a reboot is needed):

$ sudo atomic host upgrade
$ sudo systemctl reboot

User creation and ssh

If you have not already created a user and set up ssh, do that now.

Docker storage setup

Make sure docker storage is configured correctly. That means it should not be using loopback devices. Check the output of the command `docker info`. If you see a message similar to

WARNING: Usage of loopback devices is strongly discouraged for production use. [...]

that means docker is using loopback devices. Go ahead and reconfigure it; this is not mandatory, but recommended.
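To check this non-interactively, you could grep the `docker info` output for the loopback hint. A small sketch — the sample string below is canned for illustration; on a real host you would pipe in the output of `sudo docker info` instead:

```shell
# Sketch: flag loopback-backed storage in `docker info` output.
sample=' Data loop file: /var/lib/docker/devicemapper/devicemapper/data'
if printf '%s\n' "$sample" | grep -q 'loop file'; then
  echo 'loopback storage in use'
else
  echo 'storage looks fine'
fi
```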

By default, the docker-storage-setup script tries to create a logical volume thinpool in the same volume group as the root filesystem. This might be inappropriate for your setup, because you might want the docker thinpool to exist in another volume group or on another storage device. Make the necessary changes by editing the file `/etc/sysconfig/docker-storage-setup`. If the volume group you want to use already exists, use:

VG=docker-vg

while replacing `docker-vg` with the appropriate volume group name. If you want to use a new partition or disk and let `docker-storage-setup` create a new volume group, define both like this:

DEVS=/dev/vdb
VG=docker-vg

(likewise replacing `vdb` and `docker-vg` with your storage device and volume group names). If the volume group does not exist yet, it is mandatory to also define `DEVS`.

Now if the `docker` daemon has already been started, we need to reset it. Stop docker:

$ sudo systemctl stop docker

Now make sure the file `/etc/sysconfig/docker-storage` is empty; `docker-storage-setup` will fill in the appropriate options based on the settings we just put into `/etc/sysconfig/docker-storage-setup`.
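One quick way to empty the file (a sketch; `truncate` is part of coreutils):

```
$ sudo truncate -s 0 /etc/sysconfig/docker-storage
```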

Also make sure everything inside `/var/lib/docker/` is removed by executing

$ sudo rm -rf /var/lib/docker/*

We are now ready to let `docker-storage-setup` set up the new docker thinpool:

$ sudo docker-storage-setup

If all is good, you can start the docker daemon again:

$ sudo systemctl start docker

Ok, we’re now ready to deploy containers!

Deploy containers

As a nod to an extremely cool open-source project for home automation, I’m going to use home-assistant as an example. Regardless of what kind of application you’d like to deploy, Docker Hub is our primary resource for container images, so look there to find out whether your application of choice already exists in a containerized version.

Download docker images

Downloading docker images on an Atomic Host is as simple as executing

$ sudo atomic install homeassistant/home-assistant

This command, however, does not start the container yet. We could do that with the typical `docker run` command, or we could use a container orchestrator like Kubernetes. To keep things simple for now, we’re going to use `systemd`.

Handle applications with systemd

The systemd unit file for home-assistant looks as follows:

$ cat /etc/systemd/system/home-assistant.service
[Unit]
Description=Home Assistant
Requires=docker.service
After=docker.service

[Service]
Restart=on-failure
RestartSec=10
ExecStart=/usr/bin/docker run --rm --name %p --volume /opt/home-assistant:/config:Z --volume /etc/localtime:/etc/localtime:ro --network host homeassistant/home-assistant
ExecStop=-/usr/bin/docker stop -t 30 %p

[Install]
WantedBy=multi-user.target

The important parts are in the `[Service]` section of the file. The `ExecStart` line defines the start command; please refer to the docker documentation if you are unsure what the different parameters mean. What is special on this line is the `:Z` suffix at the end of the directory specified with the `--volume` parameter: it takes care of the SELinux context while the container is running. An upper-case `:Z` means that only this single container has access to the directory; a lower-case `:z` means every container has access, for shared-volume scenarios. Be aware that you still have to make sure file permissions are correct so the application user inside the container has the access it needs. In the case of home-assistant, we do not have to do anything special, so we just create the directory that will hold home-assistant’s configuration (the configuration itself is not part of this tutorial):

$ sudo mkdir -p /opt/home-assistant

After creating the directory and the unit file, `systemd` needs to pick it up:

$ sudo systemctl daemon-reload

Now we’re able to control our container with `systemd` using the familiar commands `start`, `stop`, etc. To have home-assistant start automatically at boot time, enable the unit with `sudo systemctl enable home-assistant.service`. Let’s start it and check its status:

$ sudo systemctl start home-assistant.service
$ sudo systemctl status home-assistant.service
● home-assistant.service - Home Assistant
   Loaded: loaded (/etc/systemd/system/home-assistant.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-09-06 22:45:37 CEST; 2s ago
 Main PID: 6971 (docker-current)
    Tasks: 9 (limit: 4915)
   Memory: 5.5M
      CPU: 18ms
   CGroup: /system.slice/home-assistant.service
           └─6971 /usr/bin/docker-current run --rm --name home-assistant -v /opt/home-assistant:/config:Z -v /etc/localtime:/etc/localtime:ro --network host h

Sep 06 22:45:37 host.example.com systemd[1]: Started Home Assistant.
[...]

Update docker images

home-assistant releases a new version every two weeks, so we want to update our docker image when that happens. To update the docker image, simply execute

$ sudo atomic images update homeassistant/home-assistant

The command checks for a new version and, if one exists, downloads it right away. You might already be thinking "why not add that command to the systemd unit?". Here’s the line that goes into the `[Service]` section of our unit file:

ExecStartPre=-/usr/bin/atomic images update homeassistant/home-assistant

Be aware though that adding this command has performance implications: every time you restart the `home-assistant` service, the update check is executed, which might force you to raise the timeout value in the unit file.
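If the pre-start update check makes startups too slow, the timeout can be raised in the same `[Service]` section; the value below is just an example:

```ini
[Service]
TimeoutStartSec=600
ExecStartPre=-/usr/bin/atomic images update homeassistant/home-assistant
```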

Conclusion

This concludes our look at Atomic Host. If you are interested in more technical tutorials, have a look at the very detailed Atomic Host 101 Labs on dustymabe.com or watch this blog for more interesting articles.
