We Moved To LXD

This blog is part of a series.

Catch part one here.

Catch part two here.

Follow us on Twitter @ionCube so you don’t miss future blogs!

 

The default setup for LXD is to use a NAT bridge connection. That's great if you want to run multiple containers on, say, a laptop and test between them; not so great if, as in our case, you want to provide access from the rest of the LAN.

Of course, if you want some excitement and pain, it would be possible to expose and redirect ports via iptables. I didn't want that level of excitement or pain, especially as this was going to involve a rapid cycle of containers; I just didn't want the extra setup and tear-down process.
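For the curious, the port-forwarding approach I avoided would have looked something like the rules below. This is a hedged sketch only: the container address 10.0.3.101 and the port numbers are hypothetical, and your NAT bridge's subnet will differ.

```shell
# Forward host port 8080 to port 80 inside a NAT'd container.
# 10.0.3.101 is an illustrative container address, not a real one.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 10.0.3.101:80

# Permit the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -p tcp -d 10.0.3.101 --dport 80 -j ACCEPT
```

Every new container would need its own rules added, and removed again when it goes: exactly the setup and tear-down overhead I wanted to avoid.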

No, I just needed each container to appear on the network with its own DHCP lease. I will skip over the different excitement and pain I did endure by not using iptables; suffice to say that the various methods, such as macvlan and bridging, initially sent me round the twist for some time. That was until I had a face-palm moment and realised that the VirtualBox adapter, while bridging OK, wasn't set to 'promiscuous' mode, and that was why all the containers were failing to route over to the LAN even though they successfully obtained a DHCP lease.
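If you hit the same wall, the promiscuous-mode setting lives in the adapter's advanced settings in the VirtualBox GUI, or can be changed from the command line with something like the following. The VM name "lxd-host" and adapter number are placeholders for whatever your setup uses, and the VM needs to be powered off first.

```shell
# Let the bridged adapter (NIC 1 here) pass frames for MAC addresses
# other than its own -- container traffic needs this to reach the LAN.
# "lxd-host" is a hypothetical VM name; substitute your own.
VBoxManage modifyvm "lxd-host" --nicpromisc1 allow-all
```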

Past that hurdle I revisited the various bridging options. First macvlan, which failed badly for me. Instead I went with the bridge-utils package and manually configured a new bridge linked to the local (VirtualBox) NIC, rather than replacing lxdbr0, the bridge LXD auto-configures when you start it for the first time. I advise leaving lxdbr0 alone: it still works as the NAT interface by default, and in some instances that may be what you want.

If you then do need full LAN access, do what I did: create an LXD profile that switches the container's eth0 over to the extra (self-configured) bridged interface. Life was much easier when I did that.

          /etc/network/interfaces

          iface enp0s3 inet manual

          auto br0
          iface br0 inet dhcp
              bridge_ports enp0s3
              bridge_stp off
              bridge_fd 0
              bridge_maxwait 0

That will then do the job: it leaves the host interface unconfigured and creates a bridge on top of it, which is what all your LAN traffic will use instead. Just note that the interface you would normally see an IP address against (enp0s3) won't have one; the interface br0 will have it instead. This is normal. Information on how to do this was obtained from GitHub and goes something like:

lxc profile create bridged

lxc profile edit bridged

 

name: bridged
config: {}
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic

 

To apply:

lxc launch trusty -p bridged newcontainer

 

or

lxc profile apply containername bridged

Switching between profiles is then nice and simple, though you do need to restart the container after making the change.
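After applying the profile and restarting, it's worth checking that the container actually picked up a LAN address rather than one from the NAT bridge. A quick check, using the container name from the launch example above:

```shell
# Restart so the profile's NIC change takes effect, then list the
# container -- the IPv4 column should show a lease from your LAN's
# DHCP server, not the 10.x.x.x NAT range.
lxc restart newcontainer
lxc list newcontainer
```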

With the LXD setup settling down, I'm much happier with the way individual containers can be provisioned ad hoc and with less overall system impact. I've yet to play with the container-specific settings such as memory and CPU restrictions; they really don't apply in this case. The VirtualBox disk images, though, remain my biggest concern, so I may consider moving the whole environment to bare metal. That will of course be another migration mini-project, and no doubt another article.
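For reference, those per-container restrictions are just LXD config keys, so when they do become relevant they're a one-liner each. A sketch, with made-up values:

```shell
# Cap a container at 512 MB of RAM and two CPUs -- the limits.memory
# and limits.cpu keys; values here are purely illustrative.
lxc config set newcontainer limits.memory 512MB
lxc config set newcontainer limits.cpu 2
```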

 

Things I Didn’t Know – Virtual Box Overload: We Moved To LXD – Part 3
