

MySQL Multi Master HA Cluster

The goal of this article is to set up an HA Linux cluster on RackSpace Cloud Servers for MySQL with multi-master replication and HAProxy for load balancing.

Begin by creating the HA Linux client as described here:

HA Linux Cluster On RackSpace Cloud Servers

Once that is complete, return to these instructions.

1. Begin by logging into each of the servers and creating a replication slave user with the following SQL commands:
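A minimal grant, assuming a replication user named "replicant" (the user name is illustrative; any name works as long as it matches the master-user setting later):

```sql
GRANT REPLICATION SLAVE ON *.* TO 'replicant'@'%' IDENTIFIED BY '[PASSWORD]';
FLUSH PRIVILEGES;
```

The '%' host wildcard is the simplest choice; you can tighten it to the other server's private IP if you prefer.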

Where [PASSWORD] is a strong random string. Use the same password on each of the servers.

2. Stop mysqld on both servers:
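Assuming the stock CentOS init script:

```shell
service mysqld stop
```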

3. On each server, edit the /etc/my.cnf file. Look through the my.cnf file for an existing section of replication options. If none is found, just add the settings to the end of the file.

a. Find the “server-id” line and make sure that each server has a unique server-id. It does not matter what the id number is … it just has to be different on each server.
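For example:

```ini
# on the first server
server-id = 1

# on the second server
server-id = 2
```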

b. Set replicate-same-server-id to 0. This tells each server to ignore replication log events that carry its own server-id.
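```ini
replicate-same-server-id = 0
```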

c. Set auto-increment-increment to 2. If you have more than two servers in your cluster then set auto-increment-increment to the number of servers. If you think that you might increase the number of servers in the near future then set it to the likely maximum number of servers. This parameter controls the span between auto-increment values: a setting of 2 causes a sequence like 2, 4, 6, 8, etc., and a value of 5 would cause a sequence of 5, 10, 15, 20, etc.
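For a two-server cluster:

```ini
auto-increment-increment = 2
```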

d. Set auto-increment-offset to 1 on the first server and 2 on the second server. auto-increment-offset works in conjunction with auto-increment-increment to control the generation of auto-increment values. Each server needs a different auto-increment-offset to avoid conflicts. With two servers, auto-increment-increment set to 2, and auto-increment-offset set to 1 and 2 on the first and second server respectively, you get the following sequences:

Server 1: 1, 3, 5, 7, etc.
Server 2: 2, 4, 6, 8, etc.

e. Now on each server, add master-host, master-user, master-password, and master-connect-retry settings. Remember that master-host should be set to the other server. I like to use hostnames, as long as they’ve been defined in /etc/hosts, so that there is no dependency on external name resolution.

Here’s the first server:
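A sketch, assuming the servers are named server1 and server2 in /etc/hosts and the replication user is called replicant (both names are illustrative):

```ini
master-host          = server2
master-user          = replicant
master-password      = [PASSWORD]
master-connect-retry = 60
```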

And here’s the second server:
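The same settings, pointing back at the first server (again with illustrative names):

```ini
master-host          = server1
master-user          = replicant
master-password      = [PASSWORD]
master-connect-retry = 60
```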

f. Uncomment or add a log-bin line:
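For example (the base name is up to you):

```ini
log-bin = mysql-bin
```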

g. Set expire_logs_days and max_binlog_size so that the binary logs don’t grow in an uncontrolled fashion.
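Reasonable starting values (tune them to your disk space and how far back you need to recover):

```ini
expire_logs_days = 7
max_binlog_size  = 100M
```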

h. Uncomment or add relay-log and relay-log-index lines:
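For example:

```ini
relay-log       = relay-bin
relay-log-index = relay-bin.index
```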

4. Next start the mysqld service on both servers:
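Again assuming the stock CentOS init script:

```shell
service mysqld start
chkconfig mysqld on
```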

5. Now login to the mysql command line on each server and run “SHOW SLAVE STATUS”. Examine the output and verify that “Slave_IO_Running” and “Slave_SQL_Running” are both “Yes”.
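For example (output abbreviated; the \G terminator prints one field per line):

```sql
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
              Slave_IO_State: Waiting for master to send event
            Slave_IO_Running: Yes
           Slave_SQL_Running: Yes
```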

Congratulations, you now have a high-availability MySQL cluster!

Special thanks to Richard Benson at Dixcart whose similar articles for Debian inspired this effort.


HA Linux Cluster On RackSpace Cloud Servers

Our goal is to set up a pair of RackSpace Cloud Servers in a redundant cluster using a shared IP address. We’ll use the “heartbeat” package from Linux-HA (http://www.linux-ha.org) for the cluster messaging layer and the “pacemaker” package from ClusterLabs (http://clusterlabs.org) for the cluster resource manager.

Before starting this procedure you’ll need to:

a. Create the two cloud servers. These instructions are specific to CentOS for the operating system.

b. Open a ticket with RackSpace Cloud support and request a public IP address to be shared between the servers.

You can use the instructions for other situations but you’ll need to make the appropriate adjustments.

1. Set up hosts file entries. On each server, edit /etc/hosts and add entries for each server’s public and private interfaces. You’ll also find it convenient to set up ssh keys between the servers for easy access.
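A sketch, with illustrative hostnames and addresses; substitute your servers’ real IPs:

```
# public interfaces
203.0.113.11    server1
203.0.113.12    server2

# private (service net) interfaces
10.180.1.11     server1-int
10.180.1.12     server2-int
```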

2. Now use yum to install some prerequisite packages:
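The exact list depends on your base image; a typical set for this build looks like:

```shell
yum -y install wget which glib2 libtool-ltdl openssl
```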

Repeat this step on the second server.

Note: Several of these packages are not available on the standard RHEL yum channels. If you’re working on something other than a RackSpace Cloud server then you might need to install the EPEL channel. Just go to:

http://fedoraproject.org/wiki/EPEL

Then download and install the appropriate package to add EPEL.

3. The version of heartbeat available in the standard yum repositories is outdated. So we’ll install a more recent version of heartbeat, pacemaker and supporting components from:

http://www.clusterlabs.org/rpm

Start by creating a working folder:
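Any location works; this sketch uses a folder under /root:

```shell
mkdir -p /root/cluster
cd /root/cluster
```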

Then use wget to download the latest version of each of the following packages:
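File names and versions change over time, so [VERSION] below is a placeholder for whatever is current in the repository. The set typically includes cluster-glue, cluster-glue-libs, heartbeat, heartbeat-libs, libesmtp, pacemaker, pacemaker-libs, and resource-agents, e.g.:

```shell
wget http://www.clusterlabs.org/rpm/epel-5/x86_64/heartbeat-[VERSION].x86_64.rpm
wget http://www.clusterlabs.org/rpm/epel-5/x86_64/pacemaker-[VERSION].x86_64.rpm
```

Repeat for each of the remaining packages.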

Finally install the packages:
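From the working folder:

```shell
rpm -Uvh *.rpm
```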

Repeat this step on the second server.

5. The next step is to configure heartbeat.

a. Set up keys for authentication between the instances.

Edit /etc/ha.d/authkeys and add:
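```
auth 1
1 sha1 [PASSWORD]
```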

Replace [PASSWORD] with a long random string.

b. Set permissions on the authkeys file:
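heartbeat refuses to start unless the file is readable only by root:

```shell
chmod 600 /etc/ha.d/authkeys
```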

c. Next edit /etc/ha.d/ha.cf and add the following:
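A typical ha.cf for this two-node setup; the timing values are common defaults and eth1 is assumed to be the private interface:

```
keepalive 2
warntime 10
deadtime 30
initdead 120
udpport 694
ucast eth1 [INTERNAL IP OF HOST2]
auto_failback on
node [HOST1]
node [HOST2]
use_logd yes
crm respawn
```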

Set [HOST1] and [HOST2] to the hostnames of the servers.

Set [INTERNAL IP OF HOST2] to the private IP address of the second server.

Repeat these steps on the second server. When you create the ha.cf file for the second server, you’ll use the internal IP of the first server in the ucast line.

d. Set up logd for automatic startup:
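Assuming the heartbeat packages installed an init script for logd:

```shell
chkconfig --add logd
chkconfig logd on
```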

Now repeat this procedure on the second server but make sure you set the internal IP of the first server in the ha.cf file.

6. Finally start the heartbeat and logd service on both servers:
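```shell
service logd start
service heartbeat start
chkconfig heartbeat on
```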

7. The next step is to configure pacemaker.

Run the pacemaker configuration tool. It is called “crm”. You’ll use it to configure “resources” which in this case is a shared IP.

If you get an error like “cibadmin not available, check your installation” when trying to run crm, then make sure that the “which” package is installed and that /usr/sbin is in your path.

Now enter the following into the pacemaker shell:
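A minimal configuration, with the resource named shared_one_ip; stonith is disabled and quorum ignored because this is a two-node cluster without fencing hardware:

```
configure
property stonith-enabled="false"
property no-quorum-policy="ignore"
primitive shared_one_ip ocf:heartbeat:IPaddr2 params ip="[SHARED_IP]" op monitor interval="30s"
location ip_pref shared_one_ip 100: [HOST1]
commit
exit
```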

Where [SHARED_IP] is the IP address to be shared between the servers and [HOST1] is the hostname of the primary server.

Once this is done, you should be able to monitor the status of the cluster from either node using the crm_mon command. You’ll get output like this:
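Illustrative output; the details vary with the installed versions:

```
============
Stack: Heartbeat
Current DC: [HOST1] (partition with quorum)
2 Nodes configured
1 Resources configured
============

Online: [ [HOST1] [HOST2] ]

 shared_one_ip        (ocf::heartbeat:IPaddr2):       Started [HOST1]
```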

8. The next step is to test failover on the servers.

a. Run crm_mon on the second server.

b. Reboot the first server:
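```shell
reboot
```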

c. Monitor the second server and notice that when the first goes offline, the “shared_one_ip” resource is switched to the second server. After the first server finishes rebooting you should see it come back online and “shared_one_ip” return to its original location on the first server.

d. Repeat this test but reboot the second server and monitor the first.

And that completes the setup process. You now have an HA Linux cluster on the cloud!


Reverse Proxy With Content Rewrites

One of our clients is a school district that wanted to make content on http://www.khanacademy.org/ (KA) available to users without YouTube. The KA site uses a bit of javascript to test whether the user’s browser can reach YouTube, and if not it serves videos from a different source. Unfortunately this was not working for the client.

To solve the problem we set up mod_proxy as a reverse proxy. Then we used mod_headers, mod_filter and mod_substitute to rewrite the javascript going to the client and force the use of the alternative video source.

Here’s the apache config that does the content rewrite. It’s located in the reverse proxy virtual host:
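A sketch of the relevant directives. The ServerName and the substituted pattern are illustrative; the real pattern has to match whatever the KA javascript actually uses for its YouTube test:

```apache
<VirtualHost *:80>
    ServerName ka.example.org

    ProxyRequests Off
    ProxyPass / http://www.khanacademy.org/
    ProxyPassReverse / http://www.khanacademy.org/

    # Ask the origin for uncompressed content so mod_substitute can edit it
    RequestHeader unset Accept-Encoding

    # Force the "no YouTube" code path (pattern is illustrative)
    AddOutputFilterByType SUBSTITUTE text/html application/x-javascript
    Substitute "s|canAccessYouTube = true|canAccessYouTube = false|n"
</VirtualHost>
```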


IPTables – Filter ICMP Address Mask Request & Replies

Here’s how to filter or block ICMP address mask requests and replies.

On Redhat/CentOS, edit /etc/sysconfig/iptables and add the following lines:

-A RH-Firewall-1-INPUT -p ICMP --icmp-type address-mask-request -j DROP
-A RH-Firewall-1-INPUT -p ICMP --icmp-type address-mask-reply -j DROP

and then run:

/sbin/service iptables restart

Or run the following commands:

/sbin/iptables -I RH-Firewall-1-INPUT 1 -p ICMP --icmp-type address-mask-request -j DROP
/sbin/iptables -I RH-Firewall-1-INPUT 1 -p ICMP --icmp-type address-mask-reply -j DROP
/sbin/service iptables save

Recently on an Ubuntu server we just added these lines to /etc/rc.local:

/sbin/iptables -I INPUT 1 -p ICMP --icmp-type address-mask-request -j DROP
/sbin/iptables -I INPUT 1 -p ICMP --icmp-type address-mask-reply -j DROP


RewriteCond -d/-f Not Working With HTTP Basic Auth.

Let’s say you have this in the document root .htaccess:
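The usual front-controller rules look like this:

```apache
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
```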

This is the kind of rewrite that WordPress, Mambo and others use to provide SEO-friendly URLs.

Now create a folder in the document root and add a .htaccess to the folder with commands to require HTTP Basic authentication.

Requests to the folder will end up being sent to /index.php and the application will generate a 404 error.

The fix is to change the rewrite rules to:
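A sketch of the fix: an extra condition exempts the error document so the authentication subrequest gets through:

```apache
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_URI} !^/401\.shtml
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
```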

This allows the invisible /401.shtml request needed for authentication to skip the rewrite rule and function correctly.
