
Recap Script

Useful logging script from RackSpace:
https://github.com/rackerlabs/recap.git
 

See the readme file for instructions.


GlusterFS Cluster With Ubuntu On RackSpace Cloud

We want to build a high-availability storage cluster based on GlusterFS using RackSpace Cloud Servers on Ubuntu. Additionally we want to use RackSpace Cloud Networks to provide private networks for replication and cluster access.

There’s a great doc by Falko Timme on Howto Forge titled:

High-Availability Storage With GlusterFS 3.2.x On Ubuntu 11.10 – Automatic File Replication Across Two Storage Servers

We’re going to follow this doc with a few modifications for Cloud Networks and using a newer version of Gluster.

As shown in the diagram, we’ll have two Cloud Networks, one for replication between the servers and one for access by clients to the storage.

Part 1 – Server Setup

1. Create servers and networks – Login to your RackSpace Cloud account and create two servers following these guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “Ubuntu 13.04 (Raring Ringtail)”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “StorageInternal”. Then create a second network named “StorageAccess”. When creating additional servers, make sure that you select the “StorageInternal” and “StorageAccess” networks.
  • If you need more storage than is included in the base image, or if you want higher performance, then consider using RackSpace Cloud Block Storage. This is not covered in this article.

Here are the addresses assigned to the servers that we created, but yours may be different:

and:

2. Open SSH shell to servers – Open an SSH shell to each of the servers. Unless noted otherwise the following steps should be repeated on both servers.

3. Upgrade packages – Update repository listings and then upgrade all packages to latest versions:
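
apt-get update
apt-get -y upgrade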

4. Setup /etc/hosts – Add hostnames on the “StorageInternal” network for each server.
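
For example, assuming fileserver1 and fileserver2 were given the placeholder addresses below on the “StorageInternal” network (substitute the addresses assigned to your servers):

192.168.100.1   fileserver1
192.168.100.2   fileserver2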

5. Install glusterfs PPA repository – The default apt repositories for Ubuntu only offer version 3.2.7 of GlusterFS. We wanted a newer version, so we’ll be modifying the install process to use the PPA distribution. Execute the following commands:
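
The commands below assume the community PPA that packaged GlusterFS 3.3 at the time; confirm the PPA name before using it:

apt-get install -y software-properties-common
add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
apt-get update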

6. Install glusterfs server – Use this command to install:
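
apt-get install -y glusterfs-server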

7. Confirm gluster version – Make sure you have the correct version:
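
glusterfs --version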

If you get an older version then you probably forgot to run “apt-get update”. Just remove the older package, update and reinstall:
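
apt-get remove -y glusterfs-server
apt-get update
apt-get install -y glusterfs-server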

8. Firewall configuration – Next we need to get a basic firewall configured to protect the servers. Ubuntu includes UFW (Uncomplicated Firewall) so we’ll go with that:
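
A minimal rule set might look like the following, assuming placeholder subnets of 192.168.100.0/24 for “StorageInternal” and 192.168.101.0/24 for “StorageAccess” (GlusterFS 3.3 uses TCP port 24007 for the management daemon and ports from 24009 up for bricks, so widen the range if you add bricks):

ufw allow ssh
ufw allow from 192.168.100.0/24 to any port 111
ufw allow from 192.168.101.0/24 to any port 111
ufw allow from 192.168.100.0/24 to any port 24007:24010 proto tcp
ufw allow from 192.168.101.0/24 to any port 24007:24010 proto tcp
ufw enable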

9. Add nodes to cluster – On fileserver1, run this command to register fileserver2 to the trusted storage pool:
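
gluster peer probe fileserver2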

The results should look like this:

10. Check cluster status – On each node run “gluster peer status” and confirm the peer addresses. Here’s fileserver1:

and here’s fileserver2:

11. Create volume – Next we need to create the actual GlusterFS volume:
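
Assuming a two-way replicated volume named testvol with bricks in /data on each server (the volume name and brick path are placeholders):

gluster volume create testvol replica 2 transport tcp fileserver1:/data fileserver2:/data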

You should get back a response like:

12. Start the volume – Now start the volume to make it available to clients with:
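
gluster volume start testvol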

You should get a response like:

The howto by Falko Timme mentioned at the start of this article gives some useful advice and tips on how to troubleshoot if the volume fails to create or start.

13. Verify volume status – Check the status of the volume with:
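
gluster volume info testvol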

You should get a response similar to:

14. Allow client access – We want to allow all clients on the “StorageAccess” network access to the cluster:
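
Assuming the placeholder 192.168.101.0/24 subnet for the “StorageAccess” network:

gluster volume set testvol auth.allow 192.168.101.*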

Now show volume info again:

And you should get:

Setup on the server is complete. Now it’s time to add clients.

Part 2 – Client Setup

1. Install glusterfs PPA repository – Execute the following commands:
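
These are the same commands used on the servers (again assuming the GlusterFS 3.3 PPA):

apt-get install -y software-properties-common
add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
apt-get update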

2. Install glusterfs client – Execute the following command:
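
apt-get install -y glusterfs-client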

3. Confirm version – Verify that you have the correct version installed:

root@db1:~# glusterfs --version
glusterfs 3.3.2 built on Jul 21 2013 16:38:01

4. Create mount point – Create a directory that will be the mount point for the gluster partition:
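
The mount point can be anywhere you like; /mnt/glusterfs is used here as a placeholder:

mkdir -p /mnt/glusterfs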

5. Setup /etc/hosts – Add the following entries to /etc/hosts:
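
For example, with placeholder addresses on the “StorageAccess” network and an assumed “-access” naming convention (adjust the names and addresses to your own setup):

192.168.101.1   fileserver1 fileserver1-access
192.168.101.2   fileserver2 fileserver2-access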

Notice that we’re mapping both the StorageInternal and StorageAccess network names to the same IP addresses on the StorageAccess network.

6. Mount volume – Execute the command to mount the gluster volume:
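
Using the placeholder names from above:

mount -t glusterfs fileserver1:/testvol /mnt/glusterfs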

7. Edit /etc/fstab – Add this line to /etc/fstab to make the mount start automatically on boot:
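
Again with the placeholder volume name and mount point:

fileserver1:/testvol  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0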

And that completes the client setup procedure. You now have a working GlusterFS storage cluster and a connected client.

Comments and suggestions are welcomed.


Percona XtraDB Cluster on Ubuntu

In this article we’re going to build a Percona XtraDB Cluster using a pair of RackSpace Cloud Servers. Percona XtraDB Cluster is a MySQL-compatible replacement supporting multi-master replication. For this project we’ll use the latest Ubuntu release and we’re going to use RackSpace Cloud Networks to set up an isolated segment for the replication between the cluster nodes. We’ll do two nodes in the cluster but you can add additional nodes as desired. Finally we’ll use a RackSpace Cloud Load Balancer to distribute traffic between the nodes.

To get started, create the cloud servers from the RackSpace control panel using the following guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “Ubuntu 13.04 (Raring Ringtail)”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “DatabaseInternal”. When creating additional servers, make sure that you select the “DatabaseInternal” network.
  • If you need more storage than is included in the base image, or if you want higher performance, then consider using RackSpace Cloud Block Storage. This is not covered in this article.

For this article we’ll assume that the “DatabaseInternal” cloud network is:

and our servers are:

and:

Here are the basic instructions from Percona that we’ll be following:

http://www.percona.com/doc/percona-xtradb-cluster/installation.html

To get started open SSH terminal sessions to each server and complete the following steps on each server unless noted otherwise:

1. Disable apparmor – the Percona docs state that the cluster will not work with apparmor and it must be disabled. Here are the steps:
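
One way to do this is to stop the service and remove the packages entirely (the exact package set may vary):

service apparmor stop
update-rc.d -f apparmor remove
apt-get remove -y apparmor apparmor-utils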

Now reboot the server to get a clean system without apparmor running.

2. Configure firewall – Next we need to get a basic firewall configured to protect the servers. Ubuntu includes UFW (Uncomplicated Firewall) so we’ll go with that:
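
For example, assuming the “DatabaseInternal” network is 192.168.3.0/24 (a placeholder); ports 4444, 4567 and 4568 are used by Galera for SST and replication traffic:

ufw allow ssh
ufw allow from 192.168.3.0/24 to any port 3306 proto tcp
ufw allow from 192.168.3.0/24 to any port 4444 proto tcp
ufw allow from 192.168.3.0/24 to any port 4567 proto tcp
ufw allow from 192.168.3.0/24 to any port 4568 proto tcp
ufw enable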

If you’re not familiar with UFW then here’s a good starting point:

https://help.ubuntu.com/community/UFW

3. /etc/hosts – Let’s add some entries to the /etc/hosts file:
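
For example, with placeholder addresses on the “DatabaseInternal” network:

192.168.3.1   db1-int
192.168.3.2   db2-int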

4. Add Percona apt Repository – Just follow the instructions here:

http://www.percona.com/doc/percona-xtradb-cluster/installation/apt_repo.html

The exact lines for sources.list are:
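
They will look something like this, with the release codename substituted for the one Percona supports for your version of Ubuntu:

deb http://repo.percona.com/apt raring main
deb-src http://repo.percona.com/apt raring main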

Don’t forget to run:
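
apt-get update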

5. Install packages – Run the following command to install the cluster packages:
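
At the time of writing the 5.5 series was current, so the package names would be something like:

apt-get install -y percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5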

You’ll be prompted by the installer to enter a root password for MySQL.

6. /etc/my.cnf – Setup configuration files on each server. The Percona distribution does not include a my.cnf file so you need to roll your own. The minimal configuration would be something like this:
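
Here is a minimal sketch for db1; the cluster name, node names and 192.168.3.x addresses are placeholders, and on db2 you would swap the node name, node address and the peer listed in the cluster address:

[mysqld]
datadir = /var/lib/mysql
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_cluster_name = dbcluster
wsrep_cluster_address = gcomm://db2-int
wsrep_node_name = db1
wsrep_node_address = 192.168.3.1
wsrep_sst_method = rsync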

Most of this is straight from the Percona documentation. The first key setting is wsrep_node_address:

This tells Percona what address to advertise to other nodes in the cluster. We need this set to the server’s address on the “DatabaseInternal” network. Without this setting SST will fail when it uses the wrong interface and is blocked by the firewall. The next key setting is wsrep_cluster_address:

This line identifies at least one other member of the cluster (for example, gcomm://db2-int). Notice that we’re using the names we set in the /etc/hosts file.

The above example for my.cnf is very minimal. It does not address any database memory or performance tuning issues so you’ll likely want to expand upon the example.

7. /root/.my.cnf – Add a .my.cnf for MySQL root authentication as described here:

http://blogs.reliablepenguin.com/2012/10/09/create-my-cnf-file-for-mysql-authentication

This step is optional. The .my.cnf is convenient but should not be used in high security environments.

8. Bootstrap Cluster – The cluster needs to be bootstrapped on the first server when it’s started for the first time. This can be accomplished with:
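
The exact mechanism depends on the package version. Two common approaches are to start the first node with an empty cluster address, or to use the bootstrap action where the init script provides one:

# option 1: temporarily set wsrep_cluster_address = gcomm:// (empty) in
# /etc/my.cnf on db1 and start normally
service mysql start
# option 2: newer packages include a bootstrap action
service mysql bootstrap-pxc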

The subject of bootstrapping is covered in more detail here:

http://www.percona.com/doc/percona-xtradb-cluster/manual/bootstrap.html

The first node startup will look something like this:

Now with the first node started, open a mysql command shell and view the wsrep_cluster_% status variables:
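
mysql> show status like 'wsrep_cluster%';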

Notice that the wsrep_cluster_size is 1 and the wsrep_cluster_status is “Primary”. This is normal for the first node in a newly bootstrapped cluster.

Now we’ll start Percona normally on each additional server. So go to db2 and run:
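
service mysql start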

This time the start should look something like this:

Startup for the second node adds the SST or “State Snapshot Transfer”. In this step the servers will use rsync and SSH keys to transfer a copy of the database from the first server to the second server.

Back on the first server in our mysql command shell we can check the status again:

Notice now according to the “wsrep_cluster_size” there are 2 nodes in the cluster.

At this point we have a functional cluster up and running.

If the second node fails to start then check the log file at:

/var/lib/mysql/db2.err

The most likely cause is a problem resolving or connecting to the db1-int server for SST.

9. Disable /etc/mysql/debian-start – The debian/ubuntu distribution of Percona includes a script at /etc/mysql/debian-start that checks and repairs tables on startup. We think this script is a bad idea in the cluster environment and should be disabled by running this command on each node:
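
One way to neutralize it, keeping a backup copy, is to turn it into a no-op:

cp /etc/mysql/debian-start /etc/mysql/debian-start.orig
printf '#!/bin/bash\nexit 0\n' > /etc/mysql/debian-start
chmod 755 /etc/mysql/debian-start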

10. Add extra functions – There are a couple of Percona specific functions that can be added to support monitoring:
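
For example, the checksum UDFs that ship with Percona Server (used by tools such as pt-table-checksum) can be registered like this:

mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"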

11. Add load balancer – The next step is to add the RackSpace Cloud Load Balancer. The load balancer will provide a single IP address for clients to connect to. It will then distribute these connections to the nodes in the cluster.

a. Login to your RackSpace Cloud Control Panel.
b. Go to Hosting section and the Load Balancers tab.
c. Click the “Create Load Balancer” button.
d. In the “Identification” section, enter a name for the load balancer like “lb-db-01” and select the Region. Use the same Region that the cluster nodes are located in.
e. In the “Configuration” section, select “On the Private RackSpace Network” for the “Virtual IP”. Set the “Protocol” to “MySQL” and the port to “3306”. Set the “Algorithm” to “Least Connections”.
f. In the “Add Nodes” section, click the “Add Cloud Servers” button and select each of the servers in the cluster.
g. Click the “Create Load Balancer” button to save the new load balancer.

It may take a couple of minutes for the load balancer to be created. When complete the IP address assigned to the load balancer will be visible. We’ll assume for this article that the address is:

Notice that this is a private, unroutable address on the RackSpace Service Network. This address is not accessible from the public Internet but it is visible to other cloud servers and devices in the same region on the RackSpace Service network. This is the address that your web or application servers will use to connect to the database.

12. Load balancer access controls – To minimize exposure of the database servers we need to add access controls on the load balancer that will limit the range of addresses that are allowed to connect. Generally you’ll only want connections from your web or application servers. In the RackSpace Cloud control panel, drill down to your load balancer, find the “Access Control” rules at the bottom and add a rule or rules to allow your client servers. Of course if you’re dynamically adding and removing servers then it might not be possible to use these access controls. Or you might need to use the load balancer API to dynamically change access controls.

13. Allow load balancer on firewalls – Next we need to adjust the firewall on each node to allow MySQL connections from the load balancer. The connections from the load balancer to the cluster nodes do not come from the load balancer address. Instead they can come from a range of addresses. The exact range depends on what region the load balancer was created in. At the time of writing this article the ranges were:

For the DFW region, use:
10.183.248.0/22
10.189.254.0/23

For the IAD region, use:
10.189.254.0/23

For the ORD region, use:
10.183.250.0/23
10.183.252.0/23
10.183.254.0/23

For the LON region, use:
10.190.254.0/23

For the SYD region, use:
10.189.254.0/23

This list may change over time. The latest ranges should be available here:

http://www.rackspace.com/knowledge_center/article/using-cloud-load-balancers-with-rackconnect

For this article we used the DFW region so the UFW rules would be:
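
ufw allow from 10.183.248.0/22 to any port 3306 proto tcp
ufw allow from 10.189.254.0/23 to any port 3306 proto tcp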

14. Create database & users – Now we’re ready to create a database and user for our web application. For any of the cluster nodes, open a MySQL command shell, create the database and add a database user:

Now your web application should be able to connect to the database cluster with the user that you just created. The database host should be the address of the load balancer.

15. Raise timeout – By default Cloud Load Balancers will timeout any idle connection after 30 seconds. This article shows how to raise the timeout:

https://community.rackspace.com/products/f/25/t/89

16. Add monitoring tools – You’ll probably want to add a few tools to monitor the database. Here’s what we normally install:

myq_gadgets is a great set of monitoring utilities.

mysqltuner helps adjust memory allocations based on actual performance:

Percona Toolkit, which can be downloaded from here:

http://www.percona.com/software/percona-toolkit

At this point you’re done! You have a working database cluster connected to your web application. Questions and comments are welcomed.

 


Sync MySQL Slave From Another Slave On Amazon AWS

I was looking for a procedure to setup a new MySQL replication slave on Amazon AWS without having to do a MySQL dump of the master. Ended up using a couple of AWS features to make this an easy process. Let’s assume that we have a master server (master01) and a slave (slave01). We want to create a new slave (slave02). We’ll do this without taking the master offline and without doing a MySQL dump. This is especially important with large databases.

A couple of notes before we start:

  • On AWS we put MySQL storage on its own EBS storage volume. We create a new empty volume, partition, add LVM and then format with xfs. This makes it easy to add additional storage in the future.
  • We prefer to use the innodb-file-per-table setting in my.cnf to force MySQL to use a separate file for each InnoDB table. This keeps all the files that belong to a database in a common folder instead of mixing multiple databases in a single file. Also it may help avoid having very large database files.
  • We’re assuming that you’ve already set up the slave02 EC2 instance. You might do this by cloning the slave01 instance.

Now here’s my procedure:

1. Login to AWS and go to EC2 -> Volumes. Find the volume holding the MySQL data. If you’ve not already done so we would recommend adding a Name tag to the volume so that you can easily see which server/function it is associated with. For example you might name it “slave01-mysql”. Right click on the volume and select “Create Snapshot”. A dialog will open where you can name the snapshot … something like “slave01-mysql-snap-1” would be appropriate. Depending on the size of the volume, this snapshot will take some time to complete.

One of the greatest features of EBS snapshots is their incremental nature. Here’s how it’s described by Amazon:

Amazon EBS snapshots are incremental backups, meaning that only the blocks on the device that have changed since your last snapshot will be saved. If you have a device with 100 GBs of data, but only 5 GBs of data has changed since your last snapshot, only the 5 additional GBs of snapshot data will be stored back to Amazon S3.

So the first snapshot we take of a volume has to copy the entire volume but additional snapshots only copy changed blocks. We’re going to use this in the next step. At this point we have a snapshot of the slave01 volume but it is inconsistent and we don’t have the point-in-time log coordinates that we need to initialize a new slave.

2. Login to slave01, stop replication and apply a read lock as follows:
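
mysql> stop slave;
mysql> flush tables with read lock;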

Make sure that the “flush tables …” completes before proceeding. It may take a few seconds or more if there are locked tables that have to be released before the lock can be completed. At this point the slave server is not fully functional. You’ll want to proceed quickly through the next steps.  Until the next step (3) is complete, do not close the terminal with the “flush tables …” as we need to keep the read lock active.

3. Open a second terminal session to slave01, record log coordinates and shutdown the mysql process as follows:
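
For example, assuming a Red Hat style init script named mysqld:

mysql -e 'show slave status \G'
service mysqld stop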

The slave status will look something like:

 

The columns that we need to record are:

Exec_Master_Log_Pos

Master_Log_File

Take note of these values for later use.

The slave01 database is now stopped and we have the log coordinates from right before the shutdown.

4. Next in AWS, start a new snapshot of the slave01-mysql volume. Name it slave01-mysql-snap-2 or something similar. This snapshot will run much faster since it is only copying blocks that changed since the first snapshot.

5. When the second snapshot is complete, start the mysql process on the slave01 server. Check replication and confirm that it comes back in sync with the master. The slave01 is now back to a functional status.

6. In AWS, right click on the slave01-mysql-snap-2 snapshot and select “Create volume from snapshot”. Name the new volume something like slave02-mysql.

Another great feature of EBS snapshots is “lazy loading” as described here:

New volumes created from existing Amazon S3 snapshots load lazily in the background. This means that once a volume is created from a snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to your Amazon EBS volume before your attached instance can start accessing the volume and all of its data. If your instance accesses a piece of data which hasn’t yet been loaded, the volume will immediately download the requested data from Amazon S3, and then will continue loading the rest of the volume’s data in the background.

This means that we can proceed with the new slave setup without waiting for the new volume to be loaded … instead it will load as needed.

7. In AWS, right click on the newly created volume and attach it to the new slave02 instance.

8. Login to the slave02 instance, confirm that mysql is stopped and mount the new volume.
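
Something along these lines, where the volume group and logical volume names are placeholders for whatever was used when the slave01 volume was built:

service mysqld status
vgchange -ay
mount /dev/vg_mysql/lv_mysql /var/lib/mysql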

9. Edit /etc/my.cnf and add “skip-slave-start” to the “[mysqld]” section.

10. In the mysql data directory (/var/lib/mysql by default), remove the following files:

  • master.info
  • relay logs  (mysqld-relay-bin.*)
  • relay-log.info

11. Start mysql server
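
service mysqld start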

12. Set replication config:
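
Use the Master_Log_File and Exec_Master_Log_Pos values recorded in step 3; the host, user, password and coordinates below are placeholders:

mysql> change master to
         master_host='master01',
         master_user='repl',
         master_password='replpassword',
         master_log_file='mysql-bin.000123',
         master_log_pos=456789;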

13. Now start replication:
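
mysql> start slave;
mysql> show slave status \G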

It may take a few minutes or more for replication on the new slave to catch up with the master. This is due to the “lazy load” on the volume created from the snapshot and the need to process the logs from the master.

Once the new slave catches up with the master then the task is complete and you have a new slave. This procedure can work with very large databases. There is no downtime on the master for this process. The existing slave is down for only a short period.


Secure Hosting On RackSpace Cloud

We’re going to build a highly secure hosting environment on RackSpace Cloud using Cloud Networks and the new Brocade Vyatta vRouter image. The diagram shows the target configuration:

(Diagram: rscloudbrocade)

There are 4 networks:

Public – this is the RackSpace Public Cloud/Internet network. The vRouter connects via eth0 to this network with a public address of 1.1.1.1.

LocalPrivate – this is a private Cloud Network that connects the vRouter and the servers. This network uses the private 192.168.3.0/24 subnet.

DBPrivate – this is a private Cloud Network that connects the web and database servers. This network uses the private 192.168.4.0/24 subnet.

Service – this is RackSpace’s multi-tenant management network. This network is not being used and is not shown in the diagram.

On these networks are three servers:

Brocade Vyatta vRouter – this device provides a router, firewall and VPN termination point

Web Server – provides hosting for the web application.

Database Server – provides hosting for MySQL database used by web application

The Web and Database servers are not directly connected to the Internet. Instead all traffic passes through the vRouter. The vRouter provides a stateful inspection firewall to control access. There is also a private network (DBPrivate) between the web and database server.

Part 1 – Create Networks & Servers

To get started, login to your RackSpace Cloud account.

1. Verify that your account is enabled for Cloud Networks and Brocade Vyatta vRouter. Go to Servers and click the Add Server button. Look in the list of available images and find the Brocade Vyatta vRouter. If it is not listed then open a support ticket and request access. Look towards the bottom of the New Server form and find the Cloud Networks section. If you don’t see this section then open a support ticket requesting access to Cloud Networks. Once you have verified that both Cloud Networks and Brocade Vyatta vRouter are available on your account then you can proceed to the next step.

2. Create a new server using the Brocade Vyatta vRouter image. The minimum size for the server should be 1GB. You can resize up later on as needed. You can not resize down. In the Cloud Networks section at the bottom of the form, add a new network named “LocalPrivate”.

3. Create the new database server using the latest CentOS 6.x image with at least 1GB of memory. In the Cloud Networks section, create a new network named “DBPrivate”. Also add the database server to the previously created “LocalPrivate” network.

4. Create the new web server using the latest CentOS 6.x image with at least 1GB of memory. In the Cloud Networks section, add the web server to the previously created “DBPrivate” and the “LocalPrivate” networks.

Take note of the root passwords and IP addresses assigned to each server as they are created.

Part 2 – Configure vRouter

Connect to the vRouter with SSH. The login will be username “vyatta” and the password set when the server was created.

Execute the following commands in order. Be careful to replace bracketed items with your actual configuration values. Most lines can be copied and pasted to the vRouter including the comments.
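
As a small illustration only (the interface numbers, addresses and rule numbers are assumptions, and the firewall and VPN sections are not shown here):

configure
# address on the LocalPrivate network
set interfaces ethernet eth2 address 192.168.3.1/24
# masquerade outbound traffic from the private network to the Internet
set nat source rule 100 source address 192.168.3.0/24
set nat source rule 100 outbound-interface eth0
set nat source rule 100 translation address masquerade
commit
save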

 

Part 3 – Configure VPN Client

Next you can configure a client workstation. Here’s the procedure to configure a Windows client to connect to the VPN:

http://blogs.reliablepenguin.com/2013/08/22/windows-l2tpipsec-client-config

These instructions include an optional step to enable split tunneling. You’ll most likely want to complete this step so that only traffic for the secure hosts is routed on the VPN.

 

Part 4 – Configure Web Server

Initially the web server will not have a viable default gateway so it will not be accessible from the Internet or the VPN. So you’ll need to SSH to the vRouter and then SSH again into the web server. Open an SSH terminal to the vRouter and login as user “vyatta”. Now ssh from here to the web server:
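
For example, if the web server was assigned 192.168.3.2 on “LocalPrivate” (a placeholder, use your own address):

ssh root@192.168.3.2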

We need to set a default gateway pointing to the vRouter. So edit /etc/sysconfig/network-scripts/ifcfg-eth2 and add this line to the bottom:
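
Assuming the vRouter’s “LocalPrivate” address is 192.168.3.1 (adjust to match your vRouter):

GATEWAY=192.168.3.1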

If your addresses or interfaces are numbered differently then you’ll need to adjust accordingly. The key issue is that the web server needs to point to the internal LocalPrivate interface on the vRouter for its default gateway in order to route to the Internet.

 

Part 5 – Configure Database Server

We need to set a default gateway pointing to the vRouter. So edit /etc/sysconfig/network-scripts/ifcfg-eth1 and add this line to the bottom:
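
Again assuming the vRouter’s “LocalPrivate” address is 192.168.3.1:

GATEWAY=192.168.3.1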

If your addresses or interfaces are numbered differently then you’ll need to adjust accordingly. The key issue is that the database server needs to point to the internal LocalPrivate interface on the vRouter for its default gateway in order to route to the Internet.

 

Notes

  • For database access from the webserver, make sure you use the DBPrivate network. I like to add an entry to the /etc/hosts file on the webserver that maps a name to the correct address on the database server like this:
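
For example, if the database server’s “DBPrivate” address is 192.168.4.3 (a placeholder):

192.168.4.3   dbserver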

 
