
Archive | RackSpace

Install Nova Client With Cloud Networks Support

Nova is the cloud management command line utility for RackSpace and other OpenStack clouds. Special setup is required to use Nova with RackSpace Cloud Networks.

Here are the docs from RackSpace:

http://docs.rackspace.com/servers/api/v2/cn-gettingstarted/content/ch_overview.html
http://docs.rackspace.com/servers/api/v2/cn-devguide/content/index.html

Here are the steps:

1. Install python-pip package

For RedHat/CentOS we need to install EPEL repository if it’s not already installed:
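One way to do this (a sketch; on CentOS the epel-release package is available from the default Extras repository, while on RHEL you may need to install the EPEL release RPM from the Fedora project directly):

yum install epel-release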

Now we can install python-pip:
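With EPEL enabled the package install is straightforward:

yum install python-pip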

2. Install rackspace version of nova client
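At the time the RackSpace build was published on PyPI as rackspace-novaclient (a metapackage that pulls in python-novaclient plus the RackSpace auth and extension plugins), so the install is roughly:

pip install rackspace-novaclient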

If “python-pip” does not work then try:

3. Reinstall nova client from latest github source:

Note that you might need to install git to complete this step:
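A sketch of this step; the GitHub URL below is the upstream python-novaclient repository, which is an assumption about what the original used:

yum install git
pip install --upgrade git+https://github.com/openstack/python-novaclient.git#egg=python-novaclient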

 4. Add credentials to .bash_profile

You’ll need to set the appropriate region, account number, username and password.
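A sketch of the environment variables to append to ~/.bash_profile, using the standard rackspace-novaclient settings (the placeholder values are yours to fill in):

export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_AUTH_SYSTEM=rackspace
export OS_REGION_NAME=DFW
export OS_USERNAME=yourusername
export OS_TENANT_NAME=youraccountnumber
export OS_PASSWORD=yourapikey
export OS_NO_CACHE=1

Then reload the profile with "source ~/.bash_profile".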

5. Verify credentials
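A quick check; either command should return without an authentication error:

nova credentials
nova list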

If this fails with an error like:

then install “pbr” module:
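Install it with pip:

pip install pbr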

6. Install RackSpace Cloud Networks Virtual Interface extensions
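The extension is installed with pip. The package name below is an assumption based on the Rackspace Cloud Networks guide linked above and may have changed, so verify it against that doc:

pip install os_virtual_interfacesv2_python_novaclient_ext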

7. List Cloud Networks
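With the credentials loaded, listing networks is a single command:

nova network-list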

 


Recap Script

Useful logging script from RackSpace:
https://github.com/rackerlabs/recap.git
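To grab a copy:

git clone https://github.com/rackerlabs/recap.git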
 

See the readme file for instructions.


GlusterFS Cluster With Ubuntu On RackSpace Cloud

Diagram: GlusterFS Storage Cluster

We want to build a high-availability storage cluster based on GlusterFS using RackSpace Cloud Servers on Ubuntu. Additionally we want to use RackSpace Cloud Networks to provide private networks for replication and cluster access.

There’s a great doc by Falko Timme on Howto Forge titled:

High-Availability Storage With GlusterFS 3.2.x On Ubuntu 11.10 – Automatic File Replication Across Two Storage Servers

We’re going to follow this doc with a few modifications for Cloud Networks and using a newer version of Gluster.

As shown in the diagram, we’ll have two Cloud Networks, one for replication between the servers and one for access by clients to the storage.

Part 1 – Server Setup

1. Create servers and networks – Login to your RackSpace Cloud account and create two servers following these guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “Ubuntu 13.04 (Raring Ringtail)”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “StorageInternal”. Then create a second network named “StorageAccess”. When creating additional servers, make sure that you select the “StorageInternal” and “StorageAccess” networks.
  • If you need more storage than is included in the base image, or if you want higher performance, then consider using RackSpace Cloud Block Storage. This is not covered in this article.

Here are the addresses assigned to the servers that we created but yours may be different:

and:

2. Open SSH shell to servers – Open an SSH shell to each of the servers. Unless noted otherwise the following steps should be repeated on both servers.

3. Upgrade packages – Update repository listings and then upgrade all packages to latest versions:
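On Ubuntu that's:

apt-get update
apt-get upgrade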

4. Setup /etc/hosts – Add hostnames on the “StorageInternal” network for each server.
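For example, using hypothetical StorageInternal addresses (substitute the addresses actually assigned to your servers), /etc/hosts on both servers would get:

192.168.10.1  fileserver1-int
192.168.10.2  fileserver2-int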

5. Install glusterfs PPA repository – The default apt repositories for Ubuntu only offer version 3.2.7 of GlusterFS. We want a newer version, so we’ll modify the install process to use the PPA distribution. Execute the following commands:
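A sketch of the PPA setup; the PPA name below is the semiosis GlusterFS 3.3 PPA that was current at the time, which is an assumption worth double-checking:

apt-get install software-properties-common
add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
apt-get update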

6. Install glusterfs server – Use this command to install:
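The package is glusterfs-server:

apt-get install glusterfs-server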

7. Confirm gluster version – Make sure you have the correct version:
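For example:

glusterfs --version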

If you get an older version then you probably forgot to run “apt-get update”. Just remove the older package, update and reinstall:
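Roughly:

apt-get remove glusterfs-server
apt-get update
apt-get install glusterfs-server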

8. Firewall configuration – Next we need to get a basic firewall configured to protect the servers. Ubuntu includes UFW (Uncomplicated Firewall) so we’ll go with that:
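A minimal sketch, assuming hypothetical subnets of 192.168.10.0/24 for StorageInternal and 192.168.20.0/24 for StorageAccess. SSH stays open to the world here (tighten as appropriate), and all traffic from the two storage networks is allowed so the various GlusterFS ports don't need to be listed individually:

ufw default deny incoming
ufw allow ssh
ufw allow from 192.168.10.0/24
ufw allow from 192.168.20.0/24
ufw enable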

9. Add nodes to cluster – On fileserver1, run this command to register fileserver2 to the trusted storage pool:
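Using the hostname we added to /etc/hosts:

gluster peer probe fileserver2-int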

The results should look like this:

10. Check cluster status – On each node run “gluster peer status” and confirm the peer addresses. Here’s fileserver1:

and here’s fileserver2:

11. Create volume – Next we need to create the actual GlusterFS volume:
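A sketch, assuming a volume named “storagevol” with a brick directory of /data on each server (create the directory first with “mkdir /data” if it doesn't exist); the volume name and brick path are hypothetical:

gluster volume create storagevol replica 2 transport tcp fileserver1-int:/data fileserver2-int:/data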

You should get back a response like:

12. Start the volume – Now start the volume to make it available to clients with:
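Using the hypothetical volume name from above:

gluster volume start storagevol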

You should get a response like:

The howto by Falko Timme mentioned at the start of this article gives some useful advice and tips on how to troubleshoot if the volume fails to create or start.

13. Verify volume status – Check the status of the volume with:
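For example:

gluster volume info storagevol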

You should get a response similar to:

14. Allow client access – We want to allow all clients on the “StorageAccess” network access to the cluster:
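This is done with the auth.allow volume option; the subnet below is the hypothetical StorageAccess range used in these examples (192.168.20.0/24):

gluster volume set storagevol auth.allow 192.168.20.*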

Now show volume info again:

And you should get:

Setup on the server is complete. Now it’s time to add clients.

Part 2 – Client Setup

1. Install glusterfs PPA repository – Execute the following commands:
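Same as on the servers (the PPA name is the same assumption noted earlier):

apt-get install software-properties-common
add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
apt-get update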

2. Install glusterfs client – Execute the following command:
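The client package is glusterfs-client:

apt-get install glusterfs-client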

3. Confirm version – Verify that you have the correct version installed:

root@db1:~# glusterfs --version
glusterfs 3.3.2 built on Jul 21 2013 16:38:01

4. Create mount point – Create a directory that will be the mount point for the gluster partition:
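For example (the mount point path is arbitrary; /mnt/gluster is used in the examples that follow):

mkdir -p /mnt/gluster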

5. Setup /etc/hosts – Add the following entries to /etc/hosts:
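Using hypothetical StorageAccess addresses for the two file servers:

192.168.20.1  fileserver1-int fileserver1-access
192.168.20.2  fileserver2-int fileserver2-access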

Notice that we’re mapping both the StorageInternal and StorageAccess network names to the same IP addresses on the StorageAccess network.

6. Mount volume – Execute the command to mount the gluster volume:
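Using the hypothetical names from above:

mount -t glusterfs fileserver1-int:/storagevol /mnt/gluster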

7. Edit /etc/fstab – Add this line to /etc/fstab to make the mount start automatically on boot:
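Something like:

fileserver1-int:/storagevol /mnt/gluster glusterfs defaults,_netdev 0 0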

And that completes the client setup procedure. You now have a working GlusterFS storage cluster and a connected client.

Comments and suggestions are welcomed.


Percona XtraDB Cluster on Ubuntu

In this article we’re going to build a Percona XtraDB Cluster using a pair of RackSpace Cloud Servers. Percona XtraDB Cluster is a MySQL-compatible replacement supporting multi-master replication. For this project we’ll use the latest Ubuntu release and we’re going to use RackSpace Cloud Networks to set up an isolated segment for the replication between the cluster nodes. We’ll do two nodes in the cluster but you can add additional nodes as desired. Finally we’ll use a RackSpace Cloud Load Balancer to distribute traffic between the nodes.

To get started, create the cloud servers from the RackSpace control panel using the following guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “Ubuntu 13.04 (Raring Ringtail)”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “DatabaseInternal”. When creating additional servers, make sure that you select the “DatabaseInternal” network.
  • If you need more storage than is included in the base image, or if you want higher performance, then consider using RackSpace Cloud Block Storage. This is not covered in this article.

For this article we’ll assume that the “DatabaseInternal” cloud network is:

and our servers are:

and:

Here are the basic instructions from Percona that we’ll be following:

http://www.percona.com/doc/percona-xtradb-cluster/installation.html

To get started open SSH terminal sessions to each server and complete the following steps on each server unless noted otherwise:

1. Disable apparmor – the Percona docs state that the cluster will not work with apparmor and it must be disabled. Here are the steps:
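One blunt but effective approach (an assumption about the exact commands; the Percona installation doc describes the equivalent steps) is simply to remove the apparmor packages:

apt-get remove apparmor apparmor-utils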

Now reboot the server to get a clean system without apparmor running.

2. Configure firewall – Next we need to get a basic firewall configured to protect the servers. Ubuntu includes UFW (Uncomplicated Firewall) so we’ll go with that:
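A minimal sketch, assuming a hypothetical DatabaseInternal subnet of 192.168.30.0/24 (substitute your own). Everything on the internal network is allowed so that MySQL (3306), the Galera group communication port (4567) and the SST ports (4444/4568) are all covered:

ufw default deny incoming
ufw allow ssh
ufw allow from 192.168.30.0/24
ufw enable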

If you’re not familiar with UFW then here’s a good starting point:

https://help.ubuntu.com/community/UFW

3. /etc/hosts – Let’s add some entries to the /etc/hosts file:
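Using hypothetical DatabaseInternal addresses (the -int names are what the configuration below refers to):

192.168.30.1  db1-int
192.168.30.2  db2-int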

4. Add Percona apt Repository – Just follow the instructions here:

http://www.percona.com/doc/percona-xtradb-cluster/installation/apt_repo.html

The exact lines for sources.list are:
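At the time they were along the lines of the following (the release name and GPG key are assumptions and should be verified against the Percona page linked above):

deb http://repo.percona.com/apt raring main
deb-src http://repo.percona.com/apt raring main

and the repository key is imported with:

apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A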

Don’t forget to run:
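apt-get update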

5. Install packages – Run the following command to install the cluster packages:
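The 5.5 series was current when this was written, so the install is roughly:

apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5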

You’ll be prompted by the installer to enter a root password for MySQL.

6. /etc/my.cnf – Set up configuration files on each server. The Percona distribution does not include a my.cnf file so you need to roll your own. The minimal configuration would be something like this:
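A reconstructed sketch based on the Percona documentation, using the hypothetical names and addresses from the /etc/hosts step above (this is the db1 copy; on db2 change wsrep_node_address to that server's own DatabaseInternal address):

[mysqld]
datadir = /var/lib/mysql
user = mysql

# Galera / wsrep settings
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_cluster_name = dbcluster
wsrep_cluster_address = gcomm://db1-int,db2-int
wsrep_node_address = 192.168.30.1
wsrep_sst_method = rsync

# Required settings for Galera replication
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1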

Most of this is straight from the Percona documentation. Key lines are:
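First, the node address (the hypothetical value from the sample above):

wsrep_node_address = 192.168.30.1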

This tells Percona what address to advertise to other nodes in the cluster. We need this set to the server’s “DatabaseInternal” interface address. Without this setting SST will fail when it uses the wrong interface and is blocked by the firewall.
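And second, the cluster address:

wsrep_cluster_address = gcomm://db1-int,db2-int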

This line identifies at least one other member in the cluster. Notice that we’re using the names we set in the /etc/hosts file.

The above example for my.cnf is very minimal. It does not address any database memory or performance tuning issues so you’ll likely want to expand upon the example.

7. /root/.my.cnf – Add a .my.cnf for MySQL root authentication as described here:

http://blogs.reliablepenguin.com/2012/10/09/create-my-cnf-file-for-mysql-authentication

This step is optional. The .my.cnf is convenient but should not be used in high security environments.

8. Bootstrap Cluster – The cluster needs to be bootstrapped on the first server when it’s started for the first time. This can be accomplished with:
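Roughly one of the following, depending on the init script shipped with your PXC version (newer packages have a dedicated bootstrap action):

service mysql bootstrap-pxc

or, on older packages, start the first node with an empty cluster address:

/etc/init.d/mysql start --wsrep-cluster-address="gcomm://"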

The subject of bootstrapping is covered in more detail here:

http://www.percona.com/doc/percona-xtradb-cluster/manual/bootstrap.html

The first node startup will look something like this:

Now with the first node started, open a mysql command shell and view the wsrep_cluster_% status variables:
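For example:

mysql> show status like 'wsrep_cluster%';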

Notice that the wsrep_cluster_size is 1 and the wsrep_cluster_status is “Primary”. This is normal for the first node in a newly bootstrapped cluster.

Now we’ll start Percona normally on each additional server. So go to db2 and run:
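A normal start this time:

service mysql start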

This time the start should look something like this:

Startup for the second node adds the SST or “State Snapshot Transfer”. In this step the servers use rsync to transfer a copy of the database from the first server to the second server.

Back on the first server in our mysql command shell we can check the status again:

Notice now according to the “wsrep_cluster_size” there are 2 nodes in the cluster.

At this point we have a functional cluster up and running.

If the second node fails to start then check the log file at:

/var/lib/mysql/db2.err

The most likely cause is a problem resolving or connecting to the db1-int server for SST.

9. Disable /etc/mysql/debian-start – The debian/ubuntu distribution of Percona includes a script at /etc/mysql/debian-start that checks and repairs tables on startup. We think this script is a bad idea in the cluster environment and should be disabled by running this command on each node:
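Any method that prevents the script from running will do; one simple sketch is to replace it with a no-op:

printf '#!/bin/bash\nexit 0\n' > /etc/mysql/debian-start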

10. Add extra functions – There are a couple of Percona specific functions that can be added to support monitoring:
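These are the UDFs that ship with Percona Server and are used by the Percona Toolkit checksum tools; exactly which functions the original step added is an assumption, but loading them is harmless (add -u root -p to each command if you skipped the .my.cnf step):

mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"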

11. Add load balancer – The next step is to add the RackSpace Cloud Load Balancer. The load balancer will provide a single IP address for clients to connect to. It will then distribute these connections to the nodes in the cluster.

a. Login to your RackSpace Cloud Control Panel.
b. Go to Hosting section and the Load Balancers tab.
c. Click the “Create Load Balancer” button.
d. In the “Identification” section, enter a name for the load balancer like “lb-db-01” and select the Region. Use the same Region that the cluster nodes are located in.
e. In the “Configuration” section, select “On the Private RackSpace Network” for the “Virtual IP”. Set the “Protocol” to “MySQL” and the port to “3306”. Set the “Algorithm” to “Least Connections”.
f. In the “Add Nodes” section, click the “Add Cloud Servers” button and select each of the servers in the cluster.
g. Click the “Create Load Balancer” button to save the new load balancer.

It may take a couple of minutes for the load balancer to be created. When complete the IP address assigned to the load balancer will be visible. We’ll assume for this article that the address is:

Notice that this is a private, unroutable address on the RackSpace Service Network. This address is not accessible from the public Internet but it is visible to other cloud servers and devices in the same region on the RackSpace Service network. This is the address that your web or application servers will use to connect to the database.

12. Load balancer access controls – To minimize exposure of the database servers we need to add access controls on the load balancer that will limit the range of addresses that are allowed to connect. Generally you’ll only want connections from your web or application servers. In the RackSpace Cloud control panel, drill down to your load balancer, find the “Access Control” rules at the bottom and add a rule or rules to allow your client servers. Of course if you’re dynamically adding and removing servers then it might not be possible to use these access controls. Or you might need to use the load balancer API to dynamically change access controls.

13. Allow load balancer on firewalls – Next we need to adjust the firewall on each node to allow MySQL connections from the load balancer. The connections from the load balancer to the cluster nodes do not come from the load balancer address. Instead they can come from a range of addresses. The exact range depends on what region the load balancer was created in. At the time of writing this article the ranges were:

For the DFW region, use:
10.183.248.0/22
10.189.254.0/23

For the IAD region, use:
10.189.254.0/23

For the ORD region, use:
10.183.250.0/23
10.183.252.0/23
10.183.254.0/23

For the LON region, use:
10.190.254.0/23

For the SYD region, use:
10.189.254.0/23

This list may change over time. The latest ranges should be available here:

http://www.rackspace.com/knowledge_center/article/using-cloud-load-balancers-with-rackconnect

For this article we used the DFW region so the UFW rules would be:
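Using the DFW ranges listed above:

ufw allow proto tcp from 10.183.248.0/22 to any port 3306
ufw allow proto tcp from 10.189.254.0/23 to any port 3306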

14. Create database & users – Now we’re ready to create a database and user for our web application. For any of the cluster nodes, open a MySQL command shell, create the database and add a database user:
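A sketch with hypothetical database, user and password names. Because connections arrive from the load balancer's source range rather than your web servers' own addresses, the user is created for any host here and access is restricted by the firewall and load balancer ACLs instead:

mysql> create database myapp;
mysql> grant all on myapp.* to 'myapp'@'%' identified by 'CHANGE-THIS-PASSWORD';
mysql> flush privileges;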

Now your web application should be able to connect to the database cluster with the user that you just created. The database host should be the address of the load balancer.

15. Raise timeout – By default Cloud Load Balancers will timeout any idle connection after 30 seconds. This article shows how to raise the timeout:

https://community.rackspace.com/products/f/25/t/89

16. Add monitoring tools – You’ll probably want to add a few tools to monitor the database. Here’s what we normally install:

myq_gadgets is a great set of monitoring utilities.


mysqltuner helps adjust memory allocations based on actual performance:

Percona Toolkit which can be downloaded from here:

http://www.percona.com/software/percona-toolkit
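The latter two are available as packages (mysqltuner from the Ubuntu repositories, percona-toolkit from the Percona repository added earlier); myq_gadgets is a git checkout per its readme:

apt-get install mysqltuner percona-toolkit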

At this point you’re done! You have a working database cluster connected to your web application. Questions and comments are welcomed.

 


Secure Hosting On RackSpace Cloud

We’re going to build a highly secure hosting environment on RackSpace Cloud using Cloud Networks and the new Brocade Vyatta vRouter image. The diagram shows the target configuration:

Diagram: RackSpace Cloud with Brocade Vyatta vRouter

There are 4 networks:

Public – this is the RackSpace Public Cloud/Internet network. The vRouter connects via eth0 to this network with a public address of 1.1.1.1

LocalPrivate – this is a private Cloud Network that connects the vRouter and the servers. This network uses the private 192.168.3.0/24 subnet.

DBPrivate – this is a private Cloud Network that connects the web and database servers. This network uses the private 192.168.4.0/24 subnet.

Service – this is RackSpace’s multi-tenant management network. This network is not being used and is not shown in the diagram.

On these networks are three servers:

Brocade Vyatta vRouter – this device provides a router, firewall and VPN termination point

Web Server – provides hosting for the web application.

Database Server – provides hosting for the MySQL database used by the web application.

The Web and Database servers are not directly connected to the Internet. Instead all traffic passes through the vRouter. The vRouter provides a stateful inspection firewall to control access. There is also a private network (DBPrivate) between the web and database server.

Part 1 – Create Networks & Servers

To get started, login to your RackSpace Cloud account.

1. Verify that your account is enabled for Cloud Networks and Brocade Vyatta vRouter. Go to Servers and click the Add Server button. Look in the list of available images and find the Brocade Vyatta vRouter. If it is not listed then open a support ticket and request access. Look towards the bottom of the New Server form and find the Cloud Networks section. If you don’t see this section then open a support ticket requesting access to Cloud Networks. Once you have verified that both Cloud Networks and Brocade Vyatta vRouter are available on your account then you can proceed to the next step.

2. Create a new server using the Brocade Vyatta vRouter image. The minimum size for the server should be 1GB. You can resize up later on as needed. You can not resize down. In the Cloud Networks section at the bottom of the form, add a new network named “LocalPrivate”.

3. Create the new database server using the latest CentOS 6.x image with at least 1GB of memory. In the Cloud Networks section, create a new network named “DBPrivate”. Also add the database server to the previously created “LocalPrivate” network.

4. Create the new web server using the latest CentOS 6.x image with at least 1GB of memory. In the Cloud Networks section, add the web server to the previously created “DBPrivate” and the “LocalPrivate” networks.

Take note the root passwords and IP addresses assigned to each server as they are created.

Part 2 – Configure vRouter

Connect to the vRouter with SSH. The login will be username “vyatta” and the password set when the server was created.

Execute the following commands in order. Be careful to replace bracketed items with your actual configuration values. Most lines can be copied and pasted to the vRouter including the comments.
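The full command listing (interfaces, firewall rule sets and the L2TP/IPsec VPN) is too long to reproduce here, but as a rough hedged sketch of the Vyatta CLI involved, the core pieces are the LocalPrivate gateway address and source NAT so the servers can reach the Internet. Treat eth2 as the LocalPrivate interface and 192.168.3.1 as the gateway address as assumptions and check your own interface assignments:

configure
# gateway address for the servers on the LocalPrivate network
set interfaces ethernet eth2 address 192.168.3.1/24
# masquerade outbound traffic from LocalPrivate through the public interface
set nat source rule 10 outbound-interface eth0
set nat source rule 10 source address 192.168.3.0/24
set nat source rule 10 translation address masquerade
commit
save
exit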

 

Part 3 – Configure VPN Client

Next you can configure a client workstation. Here’s the procedure to configure a Windows client to connect to the VPN:

http://blogs.reliablepenguin.com/2013/08/22/windows-l2tpipsec-client-config

These instructions include an optional step to enable split tunneling. You’ll most likely want to complete this step so that only traffic for the secure hosts is routed on the VPN.

 

Part 4 – Configure Web Server

Initially the web server will not have a viable default gateway so it will not be accessible from the Internet or the VPN. So you’ll need to SSH to the vRouter and then SSH again into the web server. Open an SSH terminal to the vRouter and login as user “vyatta”. Now ssh from here to the web server:
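For example, if the web server’s LocalPrivate address is 192.168.3.2 (a hypothetical value):

ssh root@192.168.3.2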

We need to set a default gateway pointing to the vRouter. So edit /etc/sysconfig/network-scripts/ifcfg-eth2 and add this line to the bottom:
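Assuming the vRouter’s LocalPrivate address is 192.168.3.1 (a hypothetical value from the sketch above):

GATEWAY=192.168.3.1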

If your addresses or interfaces are numbered differently then you’ll need to adjust accordingly. The key issue is that the web server needs to point to the internal LocalPrivate interface on the vRouter for its default gateway in order to route to the Internet.

 

Part 5 – Configure Database Server

We need to set a default gateway pointing to the vRouter. So edit /etc/sysconfig/network-scripts/ifcfg-eth1 and add this line to the bottom:
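Again assuming 192.168.3.1 as the vRouter’s LocalPrivate address:

GATEWAY=192.168.3.1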

If your addresses or interfaces are numbered differently then you’ll need to adjust accordingly. The key issue is that the database server needs to point to the internal LocalPrivate interface on the vRouter for its default gateway in order to route to the Internet.

 

Notes

  • For database access from the webserver, make sure you use the DBPrivate network. I like to add an entry to the /etc/hosts file on the webserver that maps a name to the correct address on the database server like this:
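For example, with a hypothetical DBPrivate address for the database server:

192.168.4.3  db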

 

References
