OpenStack – Trials and Tribulations


“Got a job for you this week – install OpenStack and let's see it running,” the boss said. At first this sounded like an easy task, but it turned out to be a monumental effort until a simple, repeatable solution became apparent.

OpenStack – the wiki page says:

“OpenStack is a free and open-source cloud computing software platform. Users primarily deploy it as an infrastructure as a service (IaaS) solution.”

Well, these statements are true, but getting a working implementation turned into a three-week effort. My initial efforts focused on building an install from scratch using the “Icehouse” release on a CentOS platform. The documentation is pretty good; I won't say excellent, far from it, but it's pretty good. The step-by-step walk-throughs appear to all be there and look very workable, and there are even separate install processes for CentOS and Ubuntu based systems. The only problem: they don't work!

After two weeks of struggling with a CentOS deployment, reviewing the OpenStack forums and doing numerous Google searches, I gave up and tried an Ubuntu install. Using the guide and even the Ubuntu installation scripts, I had to admit defeat until I came across the StackGeek installation web page. By now I was well seasoned in the issues and errors from the previous attempts, so I gave StackGeek a try. After 10 minutes I had a web page I could log into, with only minor hassles to iron out, mainly around networking; the deployment of instances more or less worked on my tarnished Ubuntu server, I could deploy a machine or two, and the install looked promising.

The first thing I noticed with the StackGeek implementation is that it uses a limited subset of the full OpenStack suite. Where I had previously tried to install Heat, Swift, Neutron, Open vSwitch and all the “high-end” modules, thinking this is what OpenStack needs, it became apparent that a minimal working OpenStack implementation only needs the following modules:

  • Keystone – Identity management
  • Nova – Manages instances
  • Cinder – Provides block storage for instances
  • Glance – Manages images
  • Horizon – Web interface
  • MySQL – Database storage of OpenStack-specific data
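
Once the install below is complete, you can sanity-check that these modules are registered. A quick check I use, assuming the stackrc environment file generated by the StackGeek scripts has been sourced (as in the steps below):

. ./stackrc
keystone service-list    # should list keystone, nova, cinder and glance (Horizon is just the web UI and is not in the catalogue)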

The key thing with StackGeek was to start clean and be prepared to re-install from a clean Ubuntu 14.04 LTS ISO image if you stuff up the networking or any other aspect of it (which I did on several occasions as I tried different implementation options). The StackGeek implementation uses basic bridged networking, which for us works fine as all instances are on the same externally exposed network. The implementation uses RabbitMQ, which is great as I'm a big fan of the RabbitMQ product and implement lots of queue-based systems (search for my other articles).
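
If you want to confirm the message broker is healthy once the scripts have run, rabbitmqctl will report on it (run as root on the node):

# quick health check of the RabbitMQ broker the install brings in
rabbitmqctl status
rabbitmqctl list_queues    # the OpenStack services create their queues here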

So the install process goes like this for a single node which is controller and compute:

  • Boot a brand new, fresh install of Ubuntu 14.04 LTS. Make sure it's NEW… not some machine you decided might do the job… NEW means a NEW fresh build.
  • sudo su -
  • Install openssh-server  (apt-get install openssh-server)
  • Configure your network with an external IP on eth0 (see the example /etc/network/interfaces after this list)
    • Enable IPv6 even if you don't use it.
    • Don't configure any other interface until AFTER a successful install.
    • Ignore virbr0; it does not have any role in OpenStack.
  • Install Git (apt-get install git)
  • Download the latest StackGeek implementation (git clone git://github.com/StackGeek/openstackgeek.git)
  • cd openstackgeek/icehouse
  • Run ./openstack_networking.sh
  • Run ./openstack_disable_tracking.sh
  • Run ./openstack_server_test.sh
  • Run ./openstack_system_update.sh
  • Run ./openstack_setup.sh
  • Run ./openstack_mysql.sh
  • Run ./openstack_keystone.sh
  • Invoke the environment (. ./stackrc)
  • Test keystone by running (keystone user-list)
  • Run ./openstack_glance.sh
  • Test with (glance image-list)
  • Run ./openstack_cinder.sh
  • Run ./openstack_loop.sh
  • Test cinder with (cinder type-list)
  • and test with (cinder create --volume-type Storage --display-name test 1)
  • Run ./openstack_nova.sh
  • Test with (nova service-list)
  • Create the private IP range instances will be allocated from with: (nova-manage network create private --fixed_range_v4=10.0.47.0/24 --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=255)
  • Run (route add -net 10.0.47.0 netmask 255.255.255.0 gw <your external IP here of eth0>)
  • Run ./openstack_horizon.sh
  • Reboot
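
For reference, the eth0 configuration mentioned above looks like the sketch below (in /etc/network/interfaces). The addresses are examples only; substitute your own network details:

# /etc/network/interfaces - static external address on eth0
# (example addresses - use your own)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8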

So what does this then give us?

The basic implementation allows you to start instances; they will get an IP from the 10.0.47.0/24 network range and will be able to access the internet, but nothing will be able to access them (no external IP) as they all NAT via eth0's real IP address.

This is where floating IPs come in. By assigning a floating IP you can allocate a real IP to your instance. However, you need to add a security group to the instance that allows traffic to it (ingress is the term). Once you log into the web interface for OpenStack you will see a project tab and under that “Access & Security”; under that you can Manage Security Groups. Create a group, call it “WEB”, and add HTTP and HTTPS to it. You can then assign that to a new instance as needed. The same thing can be done from the command line, as sketched below.
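
If you prefer the command line, a sketch with the nova client follows (the instance name and address here are examples only, and with nova-network a floating IP pool must already exist before floating-ip-create will hand anything out):

# create the security group and open HTTP/HTTPS ingress to the world
nova secgroup-create WEB "allow inbound web traffic"
nova secgroup-add-rule WEB tcp 80 80 0.0.0.0/0
nova secgroup-add-rule WEB tcp 443 443 0.0.0.0/0
# grab a floating IP and attach it to an instance (example name and address)
nova floating-ip-create
nova add-floating-ip myinstance 203.0.113.10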

What's in the full suite then?

The “Full” suite of OpenStack consists of the following modules:

  • Identity (Keystone)
  • Compute (Nova)
  • Image storage (Glance)
  • Block storage (Cinder)
  • Dashboard (Horizon)
  • Network (Neutron)
  • Object storage (Swift)
  • Metering (Ceilometer)
  • Orchestration (Heat)
  • Database as a service (Trove)

The additional modules provide levels of scalable services. I can see a need for them in many applications, but right now just getting a basic OpenStack implementation going is the first step in building a cloud offering.

Logging into instances

This one took me a while to come to terms with. The documentation out there is very lax on explaining how to log into instances, but it's surprisingly easy. First, under the Access & Security tab mentioned before, there is “Key Pairs”. Create yourself a key pair; use your name (no spaces) or your business name. Save the downloaded “.pem” file to your desktop machine and upload it to the compute node into your home directory.

Using the Horizon web interface create a new instance, use the Ubuntu image for example, and note the IP address assigned to the instance; let's say it received 10.0.47.2. On the compute node use the following command:

ssh -i /home/yourloginname/yourname.pem ubuntu@10.0.47.2

If successful you should now have a command prompt for the instance! If not, double-check the IP and make sure the key pair was assigned to the instance. I have found the CentOS images usually log in as centos@aa.bb.cc.dd and the Ubuntu images as ubuntu@aa.bb.cc.dd.
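
One gotcha: ssh will refuse a key file that is readable by anyone else (the “UNPROTECTED PRIVATE KEY FILE” warning), so lock the .pem down after uploading it:

# ssh insists the private key is only readable by you
chmod 600 /home/yourloginname/yourname.pem
ssh -i /home/yourloginname/yourname.pem ubuntu@10.0.47.2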

Adding images

Adding additional images is very easy. A Google search for OpenStack images returned a couple of web sites including the CentOS repository, so I started there. Images are controlled by Glance, so you download the image to somewhere on each compute node and then register it with Glance.

Here is an example of importing a CentOS 7 image from http://cloud.centos.org/centos/7/devel/:

. /root/openstackgeek/icehouse/stackrc
cd /var/lib/nova/images
wget http://cloud.centos.org/centos/7/devel/CentOS-7-x86_64-GenericCloud.qcow2
glance image-create --name centos7_x86_64 --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --is-public True --progress

The --progress flag gives a neat progress bar as the image is processed; after that you can create instances from it.
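
As an example, booting an instance from the newly registered image with the nova client might look like this (the flavor, key pair and security group names are placeholders; adjust them to whatever you created earlier):

# boot a test instance from the imported CentOS 7 image
nova boot --image centos7_x86_64 --flavor m1.small --key-name yourname --security-groups WEB centos7-test
nova list    # watch for the instance to go ACTIVE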

When you import an image, don't move it from its location, as Glance keeps a reference to the image. Glance does not store the image in the database!

Block Storage Deep Dive

Block storage is provided by the “Cinder” service. It's based on LVM, which is great, but it's implemented as a file on disk and uses the loopback device to reference it. If you are good with LVM, as any system administrator should be, you can easily delete and recreate the Cinder volume group to be based on real physical volumes backed by hard disks; a sketch follows below. The openstack_loop.sh script creates the loopback disk. When you run it, make the disk big, say 100G or more, so you can store a number of instances in it when you create them with a volume. You need to do this if you blow away the instance and want to make a new one from the volume (since that's where the instance is stored).
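
As a sketch, replacing the loopback file with a real disk might look like the commands below. This assumes /dev/sdb is a spare disk and that the volume group name matches the one in your cinder.conf (cinder-volumes is the usual default; check yours first):

# turn a spare physical disk into the volume group Cinder carves volumes from
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
vgdisplay cinder-volumes    # confirm the space is visible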

Todo

I will be expanding this blog post and creating some new ones down the track, a bit like my SaltStack series, which gets lots of hits. The things I have left to do are:

  • Add fixed real IPs.
  • Add additional compute nodes.
  • Look into the Heat module.
  • Deploy GlusterFS as the Cinder storage rather than LVM.
  • Implement Docker containers for client deployments.
