In my previous blog, I wrote about my experience installing a single-node OpenStack Icehouse on a desktop/laptop using RDO. In this blog I will demonstrate how to set up a simple multi-node OpenStack installation on AWS (Amazon Web Services). This is ideal in situations where we do not have access to hardware for a multi-node installation, or simply don't have the time to set up machines for it.
Since I intend to just demonstrate the possibility of running OpenStack on AWS, I will keep the architecture simple. I would also like to state that such setups should be used purely for trying out OpenStack and not for production.
My architecture will contain 1 Controller Node & 2 Compute Nodes. I will also try to keep the costs as low as possible, so I will use the smallest EC2 instance that will do the job. Unfortunately we cannot use t1.micro instances, which are eligible for the free tier, since their memory is very low. (I did try, though, and it failed with a "not enough memory" error message.)
Compute (EC2) – 3 X General Purpose m1.small instances, EBS backed
Networking – Default Private and Public IP assignment by AWS. In case you wish to assign IPs that won’t change after a reboot, assign Elastic IPs.
Storage (EBS) – 20 GB root volumes on each instance. No additional storage.
Amazon Machine Image (AMI) – Red Hat Enterprise Linux 6.5 (PV) 64 bit
Step 1: Pre-installation tasks
Well, before we can install OpenStack we have to get the EC2 instances running.
Click Launch Instances in EC2 Dashboard.
Select an Amazon Machine Image (AMI)
I chose Red Hat Enterprise Linux 6.5 (PV) 64 bit for the purpose of this demonstration. You can also choose Red Hat Enterprise Linux 6.5 (HVM); however, the available instance types may differ.
Next, I chose the instance type: General purpose m1.small.
The next step is to configure the instances. Here there are 3 important things you need to ensure are set correctly.
Number of instances as per the architecture – 3 in my case
The desired VPC is selected in the Network section (create a new one for OpenStack if needed)
A check next to Automatically assign a Public IP address to your instances
Next is the storage; the default is 10 GB. I changed it to 20 GB. I did not add any additional storage since I was just testing.
Finally, in the Security Groups configuration section, choose an existing security group or create a new one that allows at least SSH (source IP restricted) and HTTP/HTTPS traffic (from any source). I created a new security group called OpenStack and enabled all traffic on all protocols from everywhere. This is a test environment, and at this point it does not matter.
Review and click on Launch to fire up the instances. When you click on Launch, it will ask you to either create a new Key Pair or choose an existing one. If creating a new one, it lets you enter a name and downloads the Private Key for you. Store it safely, as it is very important.
I already had one created, so I chose that and acknowledged that I have the Private Key.
Click Launch Instances, sit back and relax. Let AWS fire up the instances and get them running for you.
Before moving on to the next step, make a note of the Public IPs assigned to all 3 instances. They will be required to SSH in, and later to access the OpenStack dashboard. It is also recommended to name the EC2 instances Controller Node, Compute Node 1, etc. This will make it easier to identify the IP addresses later.
Step 2: OpenStack Installation
Connect to all 3 instances using the Public IP of each EC2 instance and the private key. The default user to log in to an AWS EC2 instance is 'ec2-user'. Note that root login is disabled on EC2 instances and all commands will be performed as ec2-user, so using sudo is required. Once SSH'd in, run the commands below on all 3 instances.
Update the packages.
sudo yum update -y
Set SELinux to permissive mode. This is done for a smooth installation, purely for the purposes of this demonstration.
sudo setenforce permissive
Run the next set of commands only on the EC2 instance that will function as the Controller Node.
The next command will create the Public-Private RSA key pair.
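The original command is not reproduced here; a minimal sketch of this step, assuming the key is generated for root in the default location with an empty passphrase, would be:

```shell
# Generate an RSA key pair for root; -N "" sets an empty passphrase,
# -f places the key in root's default SSH directory
sudo ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
```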
Print the newly created Public RSA key on the screen.
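Assuming the key was generated in root's default location, this can be done with:

```shell
# Display root's public key so it can be copied
sudo cat /root/.ssh/id_rsa.pub
```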
Copy the public key printed on the screen. It looks something like this.
Perform the next set of steps on all 3 EC2 instances.
Add the copied public key into the authorized_keys file. I would like to emphasize here that it needs to be added or appended to the file and not replaced. So you should have two keys in there. One for EC2 key pair and other for the root account of the Controller Node. It should look something like this below:
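One way to append (not overwrite) the key for root is sketched below; the key string here is a placeholder for the one you copied from the Controller Node:

```shell
# Append the Controller Node's public key to root's authorized_keys.
# Note the -a flag to tee: it appends rather than replaces.
echo "ssh-rsa AAAA...copied-key... root@controller" | sudo tee -a /root/.ssh/authorized_keys
```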
We also need to copy the same public key into authorized_keys file under the ec2-user account.
Again ensure you are adding/appending the key to the file. Finally it should have two keys in there.
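A sketch of the same append for ec2-user, again with a placeholder for the copied key:

```shell
# Append the Controller Node's public key to ec2-user's authorized_keys.
# '>>' appends; a single '>' would wipe out the existing EC2 key pair entry.
echo "ssh-rsa AAAA...copied-key... root@controller" >> /home/ec2-user/.ssh/authorized_keys
```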
We are copying the RSA public key to authorized_keys file for both root and ec2-user to ensure smooth installation of OpenStack. Lack of this public key in either one of them will throw an error similar to the one below:
Check that you are able to SSH into the other EC2 instances from any one of them using both the ec2-user and root accounts.
sudo ssh ec2-user@<privateip>
sudo ssh root@<privateip>
Perform the below steps on the EC2 instance which is the Controller Node.
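The commands for setting up RDO and generating the answer file are not reproduced above; for Icehouse on RHEL 6 they would look roughly like the following (the repository RPM URL is an assumption; the answer-file name matches the file edited in the next step):

```shell
# Enable the RDO repository and install packstack
sudo yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
sudo yum install -y openstack-packstack

# Generate an answer file that we can customize before installing
sudo packstack --gen-answer-file=answer-file-5172014.txt
```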
Edit the answer file to change the IP addresses in CONFIG_NOVA_COMPUTE_HOSTS.
sudo vi answer-file-5172014.txt
Locate CONFIG_NOVA_COMPUTE_HOSTS and replace its value with the Private IP addresses of the other two EC2 instances (which will serve as Compute Nodes). It should look something like this:
CONFIG_NOVA_COMPUTE_HOSTS=<PrivateIP of Compute Node1>,<PrivateIP of Compute Node2>
NOTE: There are many other customizations that can be done through the answer file like specifying passwords, choosing which OpenStack components to install and where etc, but we won’t deep dive into that here.
Finally start the OpenStack installation.
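Assuming the answer file generated earlier, the installation is started with:

```shell
# Run packstack with the customized answer file
sudo packstack --answer-file=answer-file-5172014.txt
```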
Again, time to relax, grab a coffee and watch packstack do the magic.
Step 3: Post Installation
Once the installation is completed it will print some additional information like IP address of the Horizon dashboard, location of log files etc. It will look something like this:
If there is a mention about rebooting the instance, please do so.
Perform the below steps on the Controller Node.
Edit the Horizon settings file (/etc/openstack-dashboard/local_settings) and change "ALLOWED_HOSTS" to reflect
ALLOWED_HOSTS = ['*']
Restart httpd service to reflect the changes done above.
sudo service httpd restart
It’s time now to finally access the Horizon dashboard and this is where the tricky part is. Although the installation completion report stated that the IP address of the Horizon dashboard was the Private IP, we can only access it from outside (the VPC) using the Public IP of the Controller Node EC2 instance.
On the EC2 instances dashboard, click on the instance that is the Controller Node and make a note of the Public IP.
Open a browser, type the Public IP address & GO.
The default login is “admin” and in order to get the password for the first time login, open a terminal and run the commands below as root:
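packstack writes the admin credentials to a file in root's home directory on the Controller Node, so they can be read with:

```shell
# packstack stores the keystone admin credentials here
sudo cat /root/keystonerc_admin
```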
This will print out the keystone credential details for the admin user; note down the password.
Enter the username and password at the Horizon Dashboard login screen and sign in.
If you are interested in learning more about OpenStack and getting some hands-on experience, we offer the Red Hat OpenStack Administration (CL210) course. Click here for more details and to fill out the form, and we will get back to you.
If you liked the post, please share and keep coming back for more articles around OpenStack.