Canonical are the folks who maintain LXD, so of course all the new documentation pointed to Ubuntu 16.04, which I thought would be a good starting point to ensure a smooth start. I tend to stick with the recommended tools until I become familiar with them before venturing into any unusual territory; otherwise, who's to know whether a failure relates to the tool or to the non-standard environment?
Anyhow, the first curiosity and knowledge gap appeared when activating LXD with lxd init, which asks whether you want to use plain directory storage or a ZFS pool. First a test run with standard directory storage, which seemed like a good idea: no use diving into something complex and unknown before I'm familiar with the tool set.
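For reference, a rough sketch of that first-time setup (the exact prompts and flags vary by LXD version; these are from the LXD 2.x era):

    # Interactive setup: LXD walks you through the storage and networking choices
    sudo lxd init

    # Non-interactive alternative with plain directory storage
    sudo lxd init --auto --storage-backend=dir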
After that all seemed to work OK, and yes, LXD containers worked much like I'd seen in Docker and, to some extent, in Vagrant: images are pulled from remote sources and cached along with any updates. Start-up time with directory storage was a little slower than I was expecting, which I found annoying.
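The basic workflow will look familiar to anyone who has used Docker. Something along these lines, using the stock ubuntu: image remote:

    lxc launch ubuntu:16.04 test1   # pull and cache the image, create and start a container
    lxc list                        # show containers and their IPs
    lxc exec test1 -- bash          # get a shell inside the container
    lxc stop test1                  # stop it
    lxc delete test1                # remove it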
Once I was familiar with starting and stopping containers, I decided to have another go and see what this ZFS option was about. Again, the awesome Canonical documentation seemed to suggest it was the better way to go. They weren't wrong there!
ZFS is a file system that has been around for a long time in the commercial Unix world, and this release of Ubuntu was the first to ship a port of it. It offers many features that make LXD really useful. The first is copy-on-write, which makes container start-up much faster: with the directory backend, the whole image was expanded into the new container at first start, whereas with ZFS each container is just a clone of the image, and only the changes you make are written to it. An apparently instant start-up!
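Switching over to the ZFS backend can be done at lxd init time. A sketch, assuming the LXD 2.x flags and a loop-file-backed pool (the pool name lxd-pool is my own choice here):

    # Create a 20 GB loop-file-backed ZFS pool and point LXD at it
    sudo lxd init --auto --storage-backend=zfs --storage-pool=lxd-pool --storage-create-loop=20

    # Each new container shows up as a near-instant copy-on-write clone of the cached image
    sudo zfs list -t all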
Another plus point with ZFS is snapshotting: much like VirtualBox snapshots, they give you restore points, but unlike VirtualBox snapshots there is no downtime at all.
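Snapshots are a one-liner, and you can take them while the container is running. For example:

    lxc snapshot test1 before-upgrade   # take a named snapshot of a running container
    lxc info test1                      # lists the container's snapshots
    lxc restore test1 before-upgrade    # roll back to the snapshot
    lxc delete test1/before-upgrade     # remove the snapshot when done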
I wasn't familiar with ZFS, and I wish I had been at the start; I might have taken a different approach to implementing it. The first thing I would have done is enable ZFS compression straight away, as it is not enabled by default. I found after a short space of time that the ZFS image files grew quickly, and this was down to the following settings I wasn't aware of:
- ZFS compression is disabled by default, and enabling it only applies to data written from that point on. It is advised to enable it (so why not make it the default?), as the compressed form brings performance benefits to disk and memory I/O. A sketch of enabling it appears after this list.
- LXD by default enables image updates and checks at a ridiculously high frequency, something like every six hours. This might not have been a problem had the LXD install been on bare metal, but my testing was inside a VirtualBox VM, and the VirtualBox disk image was expanding at an alarming rate even when the system appeared to be idle. It wasn't until I found some LXD logs that I realised it was polling and updating, so I disabled the image updates. This is a little messy, especially done retrospectively, but the details from S3hh's Blog were really helpful. Specifically, the command lxc image info shows an image's meta-data, including the auto-update flag, and lxc image edit lets you change it to disable updates per image. There are also LXD config options to change this globally, such as:
- lxc config set images.auto_update_cached false (stop auto-updating cached images)
- lxc config set images.remote_cache_expiry 1 (expire unused cached images after one day)
- lxc config set images.auto_update_interval 0 (disable the periodic update check entirely)
- Coupled with the updates and high disk consumption was the fact that ZFS holds on to images until they are no longer linked to anything. This again hit the VirtualBox disk image, as the ZFS pool did not release space until every linked image was done with. Basing LXD containers off other containers using the copy-on-write feature meant that images were daisy-chained in many places, holding on to space; the sketch below shows how to track that down.
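To make the compression and disk-space points concrete, a rough sketch, assuming a ZFS pool named lxd-pool (my own name from earlier) and lz4, the usual compression choice:

    # Enable compression on the pool; it only applies to data written from now on
    sudo zfs set compression=lz4 lxd-pool
    sudo zfs get compression,compressratio lxd-pool   # verify, and see the achieved ratio

    # Track down what is holding space; it is only reclaimed once nothing
    # references a cached image any more
    lxc image list                   # cached images and their fingerprints
    lxc image delete <fingerprint>   # <fingerprint> is a placeholder for a real one
    sudo zfs list -t all             # datasets, snapshots and clones and the space they hold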
I really like the ZFS file system with LXD. It provides many great features; however, its impact on the underlying system requires more consideration than I had anticipated.
The LXD containers themselves introduced other frustrations, compounded by the sheer volume of Canonical and other internet documentation.