Sean Collins

Building a cheap, compact, and multinode DevStack environment for a home lab

One of the first things any new OpenStack contributor does is download DevStack and run stack.sh to set up a development version of OpenStack.

It could be in a virtual machine, or on their laptop. Heck, it could even be on a virtual machine in a public OpenStack cloud (which is what the OpenStack CI system uses).

Background

I personally have run DevStack in a wide range of environments. For a long time, I ran it using vagrant_devstack. The advantage was that with Vagrant I could quickly build, use, and destroy virtual machines on my laptop, and share Vagrant configurations with fellow developers so that we would all have identical environments.

However, there were differences between how DevStack configured Neutron networking and how Neutron was deployed and configured in production. Some of the lower-level networking details were significantly different from a basic DevStack installation, and trying to emulate them in VirtualBox was sometimes a hassle. At times I even found myself fighting with Vagrant to get it to work with VirtualBox the way I wanted.

For these reasons, I started to deploy DevStack onto bare metal machines that ran in a lab, and worked with Anthony Veiga to get our development environment and lab to match what was being deployed in production.

Part of the work was modifying DevStack to support configuring the bare metal nodes' interfaces the same way our production machines were configured, and also getting DevStack to create Neutron networks with the same settings as in production.

I built some tooling to help orchestrate DevStack runs on the bare metal nodes, so that I could create templates for DevStack’s local.conf files, and deploy them to nodes based on roles.

Having physical hardware, and multiple physical machines at that, was extremely useful. It mimics how OpenStack is installed in production environments and, especially in the networking space, shows all the different moving parts that work in concert to implement the Neutron API: configuring the top-of-rack switch that connects the physical machines, seeing how packets flow out of the compute nodes to the network node, and being able to control each part of the interaction.

After I left Comcast and joined Mirantis, my access to the lab environment where I had done all my development and testing was, of course, revoked.

It sucked.

So, I set out to build a lab in my home, so that I could continue doing development work.

The $1000 challenge

Space, power, cooling, and cost were the main factors in picking parts. I wanted a three-node setup, so each node needed to be cheap and compact.

There’s been a lot of press about Intel’s Next Unit of Computing (NUC), so I decided to price out the components.

The total cost? $965.94.

Creating One Touch Deployments

Similar to a virtual machine provisioned by Vagrant, I wanted provisioning my hardware to be a one-touch operation. If I accidentally broke something, I wanted to be able to wipe and reinstall without having to babysit it.

Since my home network already uses a FreeBSD machine for firewalling, NAT, and DHCP, it was easy to add PXE configuration to the mix.

Setting up PXE and TFTP on FreeBSD

Adding PXE to my isc-dhcpd configuration was easy.

subnet 192.168.1.0 netmask 255.255.255.0 {
	range 192.168.1.10 192.168.1.254;
	option routers 192.168.1.1;
	server-identifier 192.168.1.1;
	# next-server points PXE clients at the TFTP server
	next-server 192.168.1.1;

	# PXE boot for NUCs: the bootloader to fetch over TFTP
	filename "pxelinux.0";
}

For TFTP, which handles transferring pxelinux.0 to each node for booting, I configured inetd.

daishi# grep 'tftp' /etc/inetd.conf
tftp    dgram   udp     wait    root    /usr/libexec/tftpd      tftpd -l -s /tank/tftpboot
tftp    dgram   udp6    wait    root    /usr/libexec/tftpd      tftpd -l -s /tank/tftpboot
daishi#
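
If inetd isn’t already enabled on the box, the usual FreeBSD steps are roughly:

# enable inetd at boot and restart it so the tftp entries take effect
echo 'inetd_enable="YES"' >> /etc/rc.conf
service inetd restart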

My pxelinux config is set up to provide both Ubuntu 14.04 and Ubuntu 15.04 kernels.

The most important part of the config is the ks keyword on the kernel command line, which instructs the installer to download a file via HTTP and use it for kickstart.
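
A rough sketch of what a pxelinux.cfg/default along these lines looks like (the kernel paths match the TFTP layout shown below; the kickstart URLs are placeholders, not my real bucket):

# pxelinux.cfg/default -- a sketch, not the exact file
DEFAULT ubuntu1404

LABEL ubuntu1404
  KERNEL ubuntu1404/linux
  APPEND initrd=ubuntu1404/initrd.gz ks=http://example-bucket.s3.amazonaws.com/ubuntu1404.ks

LABEL ubuntu1504
  KERNEL ubuntu1504/linux
  APPEND initrd=ubuntu1504/initrd.gz ks=http://example-bucket.s3.amazonaws.com/ubuntu1504.ks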

The kernel and initial ramdisk were manually fetched from Ubuntu’s netboot installer directory, at the following URLs:

  • http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/

  • http://archive.ubuntu.com/ubuntu/dists/vivid/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/

They are saved on my FreeBSD machine in the following directory:

scollins@daishi ~ » ls /tank/tftpboot
pxelinux.0   pxelinux.cfg ubuntu1404   ubuntu1504

The boot directory is structured as follows:

scollins@daishi ~ » find /tank/tftpboot/ubuntu1404
/tank/tftpboot/ubuntu1404
/tank/tftpboot/ubuntu1404/initrd.gz
/tank/tftpboot/ubuntu1404/linux

initrd.gz is the initial ramdisk used to boot Linux, and the linux file is the Linux kernel itself.
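
Scripting that download is straightforward with FreeBSD’s fetch(1); for 14.04 it looks something like this (15.04 is analogous, swapping trusty-updates for vivid):

# grab the Ubuntu 14.04 netboot kernel and initrd into the TFTP tree
NETBOOT=http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64
fetch -o /tank/tftpboot/ubuntu1404/linux ${NETBOOT}/linux
fetch -o /tank/tftpboot/ubuntu1404/initrd.gz ${NETBOOT}/initrd.gz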

Kickstart

Kickstart is a feature of some Linux distributions (originally just Red Hat, but Ubuntu now supports it as well) that automates installations: as soon as a node boots, it runs the installation with the options I have already chosen, instead of requiring manual intervention.

I have an Amazon S3 bucket that I use to serve the kickstart scripts, because I am too lazy to run my own HTTP server at home. After doing that for a number of years, I’m happy to pay Amazon $0.10 to have them deal with it.
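
Such a kickstart file is mostly a handful of directives. A minimal sketch, not my actual script (Ubuntu supports a subset of Red Hat’s kickstart syntax, and every value here is a placeholder):

# minimal kickstart sketch for an unattended Ubuntu install
# (values are placeholders; partitioning and other site-specific directives omitted)
install
text
lang en_US.UTF-8
keyboard us
timezone America/New_York
rootpw --disabled
user stackuser --fullname "Stack User" --password changeme
reboot

%packages
openssh-server
git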

First DevStack run

After installing the base operating system, I run the rake clone task to ssh into each node and clone DevStack. After the initial clone, I run rake again, which generates a local.conf file for each node, scps it onto that node, and then invokes DevStack’s stack.sh script to build out the cluster.

The settings.yml file handles the configuration: which node is the controller and which are just compute nodes, along with their hostnames and IP addresses. Currently, IP addresses are used in the local.conf files, but eventually I’m going to move to hostnames only, so that I can use IPv6 for the control plane, something Brian and I talked about in Tokyo.

---
controller:
  - hostname: devstack-1.coreitpro.com
    ip: 192.168.1.246
user: stack
nodes:
  - hostname: devstack-2.coreitpro.com
    ip: 192.168.1.216
  - hostname: devstack-3.coreitpro.com
    ip: 192.168.1.217
devstack_branch: ipv6_fixes
devstack_repo: https://github.com/sc68cal/devstack.git
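
The generated local.conf files follow the usual multinode DevStack split between the controller and the compute nodes. A rough sketch using the standard DevStack variables (passwords and the exact service list are placeholders, not my actual templates):

# controller (devstack-1) local.conf -- sketch
[[local|localrc]]
HOST_IP=192.168.1.246
MULTI_HOST=True
ADMIN_PASSWORD=changeme
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# compute node (devstack-2/3) local.conf -- sketch
[[local|localrc]]
HOST_IP=192.168.1.216
SERVICE_HOST=192.168.1.246
DATABASE_TYPE=mysql
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
MULTI_HOST=True
ADMIN_PASSWORD=changeme
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
ENABLED_SERVICES=n-cpu,q-agt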

Summary

So, it is possible to have a multinode DevStack cluster for under $1,000 USD. It’s been a serious productivity enhancement, and I’d recommend it to anyone.

In a follow-up article, I plan on discussing some of the networking in this lab environment, and how you can use this setup to explore more parts of Neutron and OpenStack.