
GlusterFS Cluster with CentOS on RackSpace Cloud

We want to build a high-availability storage cluster based on GlusterFS using RackSpace Cloud Servers on CentOS. We also want to use RackSpace Cloud Networks to provide private networks for replication and cluster access. Finally, we want to use RackSpace Cloud Block Storage with LVM and XFS to simplify future expansion.

This article is based on a previous article we published on the same topic for Ubuntu:

GlusterFS Cluster With Ubuntu On RackSpace Cloud

As shown in the diagram, we’ll have two Cloud Networks, one for replication between the servers and one for access by clients to the storage.

The next diagram shows the filesystem layers – local, CBS, LVM, XFS and finally GlusterFS.

Part 1 – Server Setup

1. Create servers and networks – Login to your RackSpace Cloud account and create two servers following these guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “CentOS 6.4”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “StorageInternal”. Then create a second network named “StorageAccess”. When creating additional servers, make sure that you select the “StorageInternal” and “StorageAccess” networks.
  • In the Advanced Options section under Disk Partition choose the “Manual” option so that we can partition and format the hard drive to our preferences.

Here are the addresses assigned to the servers that we created; yours may be different:


2. Open SSH shell to servers – Open an SSH shell to each of the servers. Unless noted otherwise the following steps should be repeated on both servers.

3. Upgrade packages – Update repository listings and then upgrade all packages to latest versions:
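On CentOS 6 this is a single yum command, run as root on both servers:

```shell
# Refresh repository metadata and upgrade every installed package
yum -y update
```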

4. Setup /etc/hosts – Add hostnames on the “StorageInternal” network for each server.
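Using hypothetical StorageInternal addresses (192.168.100.1 and 192.168.100.2 below; substitute the addresses actually assigned to your servers), the entries would look like:

```shell
# /etc/hosts -- example entries; replace the IPs with your own
# StorageInternal addresses
192.168.100.1   fileserver1
192.168.100.2   fileserver2
```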

5. Partition primary storage – Since we chose the “Manual” disk partition option, a 20GB root partition was created and the remainder of the primary storage was left unpartitioned. We need to partition this storage in preparation for later use in our LVM array.

Here’s our fdisk session:
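The session is interactive; a typical sequence for turning the remaining free space into a second primary partition looks like this (the exact answers depend on your disk layout):

```shell
# Create partition 2 from the unpartitioned space on the root device:
#   n  - new partition
#   p  - primary
#   2  - partition number
#   (accept the default first and last sectors to use all free space)
#   w  - write the partition table and exit
fdisk /dev/xvda
```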

Since we changed the partition table on the root device a reboot is required to make the new partition visible. Continue once the reboot has been completed.

6. Add Cloud Block Storage – For this example, we need about 250GB of usable storage. The partition we just created has 20GB, so we need to add about 230GB more using Cloud Block Storage.

a. Login to the cloud management console and drill down to the first server. Scroll down to the bottom of the page and click “Create Volume” in the “Storage Volumes” section.

b. Complete the Create Volume form. I like to include the hostname in the volume name along with an iterator, so I used “fileserver1-1”. Set Type to “Standard” unless you need high performance and can afford the premium SSD drive. Enter the size, which is 230GB for this example. Click the “Create Volume” button to complete.

c. Click the “Attach Volume” button, select the volume from the list, and click “Attach Volume” again to confirm.

d. Wait a few minutes for the storage to attach to the server.

From a terminal on the server use dmesg to see latest kernel messages:
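A quick way to see just the most recent messages:

```shell
# Show the last few kernel messages
dmesg | tail
```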

At the bottom of the output you should see something like this:

So you can see that the new volume has been attached to the “xvdb” device. Your device name may be different.

e. Now partition the new storage as one large block. Here’s our fdisk session:
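As before, the session is interactive; this time we create a single primary partition spanning the whole device (adjust the device name if yours attached somewhere other than xvdb):

```shell
# One primary partition covering the entire CBS volume:
#   n, p, 1, accept the default first and last sectors, w
fdisk /dev/xvdb
```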

Repeat this process for the second server.

7. Setup LVM Partition – We now have 2 partitions:

/dev/xvda2 – 20GB
/dev/xvdb1 – 230GB

We’re going to use LVM to combine these two physical partitions into a single logical partition.

If you’re not familiar with LVM, then here’s a great tutorial:

A Beginner’s Guide To LVM

Here are the steps:

a. Install LVM and XFS:
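On CentOS 6 the LVM tools and the XFS utilities come from the lvm2 and xfsprogs packages:

```shell
yum -y install lvm2 xfsprogs
```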

b. Prepare the physical volume with pvcreate:
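Both partitions get initialized in one command:

```shell
# Mark both partitions as LVM physical volumes
pvcreate /dev/xvda2 /dev/xvdb1
```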

c. Add physical volumes to a “volume group” named “vg1”:
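The volume group is created with both physical volumes as members:

```shell
# Create volume group "vg1" containing both physical volumes
vgcreate vg1 /dev/xvda2 /dev/xvdb1
```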

d. Create logical volume “gfs1” inside “vg1”:
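We want the logical volume to span all the space in the group:

```shell
# Use all free space in vg1 for the logical volume "gfs1"
lvcreate -l 100%FREE -n gfs1 vg1
```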

e. Format the logical partition with XFS:
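A plain mkfs.xfs works; the -i size=512 option below is an addition that follows the common GlusterFS recommendation to enlarge inodes so the extended attributes Gluster stores on every file fit inside them:

```shell
# Format with XFS; 512-byte inodes leave room for Gluster's
# extended attributes
mkfs.xfs -i size=512 /dev/vg1/gfs1
```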

f. Create a mount point for the partition:
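We’ll use /export/gfs1 as the mount point in the examples that follow; the path is our assumption and any directory will do:

```shell
mkdir -p /export/gfs1
```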

g. Add to /etc/fstab
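Assuming the /export/gfs1 mount point from the previous step, the fstab entry would be:

```shell
# /etc/fstab entry for the XFS logical volume
/dev/vg1/gfs1   /export/gfs1   xfs   defaults   0 0
```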

h. Mount the new partition:
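With the fstab entry in place, the mount point alone is enough:

```shell
# Mount using the new fstab entry
mount /export/gfs1
```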

Repeat this step on both servers.

8. Install glusterfs yum repository – Run the following command to add the gluster repository to yum:
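At the time of writing, the project published a yum repo file for CentOS on download.gluster.org; check the site for the current path before relying on this URL:

```shell
wget -P /etc/yum.repos.d \
    http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
```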

9. Install glusterfs software – Use the following command to install the cluster software:
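On the servers we want the daemon as well as the FUSE client (package names as published in the Gluster repo):

```shell
# Server packages: core libraries, the daemon, and the FUSE client
yum -y install glusterfs glusterfs-server glusterfs-fuse
```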

10. Confirm gluster version – Make sure you have the correct version:
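The client binary reports the installed version:

```shell
glusterfs --version
```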

11. Firewall configuration – Next we need to get a basic firewall configured to protect the servers. CentOS uses bare iptables, so we just need to run the following:
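A sketch of the rule, assuming the StorageInternal network landed on eth4 (check `ip addr` to see which interface yours is on):

```shell
# Trust all traffic arriving on the StorageInternal interface
iptables -I INPUT -i eth4 -j ACCEPT
```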

This rule allows all traffic on eth4, which is the “StorageInternal” network.

Next we need to allow connections from clients. We’re going to restrict access based on IP address. You’ll need to repeat these commands for each different client. Let’s assume that we have a client with an IP address of x.x.x.x then add the following rules:
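GlusterFS listens on a management port (24007) plus one port per brick, and on the portmapper and NFS ports if you use NFS access. The brick port range varies by GlusterFS version, so treat the ranges below as a sketch and widen them if you add bricks:

```shell
# Allow one client (replace x.x.x.x with the client's address)
iptables -I INPUT -s x.x.x.x -p tcp --dport 24007:24011 -j ACCEPT   # management + bricks
iptables -I INPUT -s x.x.x.x -p tcp --dport 111 -j ACCEPT           # portmapper
iptables -I INPUT -s x.x.x.x -p tcp --dport 38465:38467 -j ACCEPT   # gluster NFS
```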

Now commit the changes with:
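On CentOS 6 the iptables init script persists the running rules:

```shell
# Write the running rules to /etc/sysconfig/iptables
service iptables save
```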

12. Start glusterd – Set glusterd to start on boot and start it for the first time:
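Two commands on CentOS 6:

```shell
# Enable at boot, then start the daemon now
chkconfig glusterd on
service glusterd start
```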

13. Add nodes to cluster – On fileserver1, run this command to register fileserver2 to the trusted storage pool:
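The probe only needs to be run from one side:

```shell
# Run on fileserver1 only; fileserver2 joins the trusted pool
gluster peer probe fileserver2
```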

The results should look like this:

14. Check cluster status – On each node run “gluster peer status” and confirm the peer addresses. Here’s fileserver1:

and here’s fileserver2:

15. Create volume – Next we need to create the actual GlusterFS volume:
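A sketch of a two-way replicated volume; “gv1” and the /export/gfs1/brick brick path are our assumed names, so substitute your own:

```shell
# Create the brick directory on BOTH servers first
mkdir -p /export/gfs1/brick

# Then, on fileserver1 only:
gluster volume create gv1 replica 2 transport tcp \
    fileserver1:/export/gfs1/brick fileserver2:/export/gfs1/brick
```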

You should get back a response like:

16. Start the volume – Now start the volume to make it available to clients with:
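Using our assumed volume name:

```shell
gluster volume start gv1
```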

You should get a response like:

The howto by Falko Timme mentioned earlier in this article gives some useful advice and tips on how to troubleshoot if the volume fails to create or start.

17. Verify volume status – Check the status of the volume with:
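Again assuming the volume name gv1:

```shell
gluster volume info gv1
```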

You should get a response similar to:

18. Allow client access – We want to allow all clients on the “StorageAccess” network access to the cluster:
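Access control is done with the auth.allow volume option. Assuming the StorageAccess network is 192.168.200.0/24 and the volume name is gv1 (both our assumptions; substitute your own):

```shell
# Permit any address on the StorageAccess subnet
gluster volume set gv1 auth.allow 192.168.200.*
```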

Now show volume info again:

And you should get:

Setup on the server is complete. Now it’s time to add clients.

Part 2 – Client Setup

We’ll assume that the first client is also going to be CentOS.

1. Install glusterfs yum repository – Run the following command to add the gluster repository to yum:
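Same repo file as on the servers (verify the current URL at download.gluster.org):

```shell
wget -P /etc/yum.repos.d \
    http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
```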

2. Install glusterfs client – Execute the following command
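The client only needs the core libraries and the FUSE client, not the server daemon:

```shell
yum -y install glusterfs glusterfs-fuse
```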

3. Confirm version – Verify that you have the correct version installed:
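As on the servers:

```shell
glusterfs --version
```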

4. Create mount point – Create a directory that will be the mount point for the gluster partition:
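/mnt/gfs1 below is our choice of mount point; use whatever suits your application:

```shell
mkdir -p /mnt/gfs1
```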

5. Setup /etc/hosts – Add the following entries to /etc/hosts:
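Assuming StorageAccess addresses of 192.168.200.1 and 192.168.200.2 (substitute your own), the entries would look like the following; if you used separate hostnames for the two networks, list both names on each line, all resolving to the StorageAccess address:

```shell
# /etc/hosts on the client -- server hostnames resolve to their
# StorageAccess addresses
192.168.200.1   fileserver1
192.168.200.2   fileserver2
```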

Notice that we’re mapping both the StorageInternal and StorageAccess network names to the same IP addresses on the StorageAccess network.

6. Mount volume – Execute the command to mount the gluster volume:
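A sketch using the native FUSE client and our assumed names:

```shell
# "gv1" and /mnt/gfs1 are our assumed volume name and mount point
mount -t glusterfs fileserver1:/gv1 /mnt/gfs1
```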

7. Edit /etc/fstab – Add this line to /etc/fstab to make the mount start automatically on boot:
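With the same assumed names as above:

```shell
# _netdev delays the mount until networking is up
fileserver1:/gv1   /mnt/gfs1   glusterfs   defaults,_netdev   0 0
```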

And that completes the client setup procedure. You now have a working GlusterFS storage cluster and a connected client.

Comments and suggestions are welcomed.