
Content Updates With W3 Total Cache & RackSpace Cloud Files

We get frequent questions from WordPress clients who use W3 Total Cache (W3TC) with the RackSpace Cloud Files (RCF) content distribution network (CDN), asking how to make changes to their website visible.

First some general notes:

  1. Content changes or updates to pages or posts do not involve the CDN, so no special action is required. W3TC will automatically flush old pages and post revisions from its caches.
  2. Media files uploaded through the WordPress admin will be automatically uploaded to the CDN and can be used immediately in posts or pages with no additional action.
  3. Files that are changed via FTP will require additional action. This includes media files, CSS and JavaScript, as discussed below.

As an example, let's assume that you've modified your theme's primary stylesheet at:

wp-content/themes/mytheme/style.css

You made the modification by downloading the file, editing it on your workstation and then uploading it back to the server via SFTP. In this case W3TC has no way of knowing that you've changed the file, so it does not know that the file needs to be uploaded to the CDN.

First you’ll need to sync or upload the file to the CDN:

  1. Login to WordPress admin.
  2. Go to Performance -> CDN
  3. Click the appropriate upload button in the General section. The button to use depends on what type of file you've changed. In this example we need to click the "Upload theme files" button.
  4. When you click the button a popup window will open. In the popup window, click the "Start" button. The sync process will start and you'll be able to watch as each file is examined. If a file has not changed since the last sync it will be marked "Object up-to-date". If a file is new or has changed it will be marked "Ok".
  5. After the sync process completes you can close the popup window. If you have many files it may take several minutes to several hours to fully sync.

Now the new or changed file has been uploaded to the CDN. If it’s a new file then it will be immediately available for use in your site.  If instead you are changing a file then we have to take an additional step and “purge” the file from the CDN.

CDNs function by distributing content to access points close to the user. For a large CDN there might be dozens or even hundreds of access points. When a user requests a file, the request is routed to the closest access point. The CDN only infrequently checks back with your web server for changed or updated files. The frequency of these checks is called the "Time to Live" or TTL. By default RCF sets a 24 hour TTL on all files, so after uploading a change it may take up to 24 hours before the change is visible on all access points. This can be a problem for our CSS change, as users on different access points might get different versions of the file.

W3TC includes a “purge” feature which you can use as follows:

  1. Login to WordPress admin
  2. Go to Performance -> CDN
  3. Look for the Purge button above the General section. Click the Purge button to open the CDN Purge Tool popup window.
  4. Enter the relative paths of each file that you want to purge into the “Files to purge” field.
  5. Click the Purge button and wait for the process to complete.
  6. Close the CDN Purge Tool window.

Purging on the CDN is not an instantaneous process. It may take up to about half an hour for all access points to receive and honor the purge request, but this is still much faster than the 24 hour TTL.

We have had mixed success with the W3TC CDN Purge Tool. If I need a change purged as fast as possible then I prefer to run the purge from the RackSpace Cloud Files control panel as follows:

  1. Login to https://mycloud.rackspace.com
  2. Go to Storage -> Files
  3. Drill down on your container and then to the file that you want to purge.
  4. Click on the Gear icon next to the file to open the drop-down menu.
  5. Select "Refresh File (Purge)" from the menu. A small confirmation dialog will open.
  6. Click “Purge File” button in the dialog to complete the process.

Again, purging on the CDN is not an instantaneous process. It may take up to about half an hour for all access points to receive and honor the purge request.

Is there any way to make updates to the CDN faster?

The short answer is "no". The TTL and propagation delay are inherent to how a CDN works. Generally the benefits of distributing content win out over the update delays.

But two notes:

  1. New files do not experience any propagation delay. If you want to change a file and have it propagate instantly, then change the name of the file so it is seen as a new file by the CDN (see the example after this list).
  2. You can lower the TTL of the CDN container. If you have a big set of updates planned, it might be a good idea to lower the TTL a few days in advance so that you minimize propagation delay.
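
For example, suppose you've edited a custom stylesheet. The file names here are hypothetical, and note that a theme's main style.css must keep its name, so this trick applies to other files:

    # copy the changed file to a new, versioned name
    cp wp-content/themes/mytheme/css/custom.css wp-content/themes/mytheme/css/custom-v2.css
    # then update the theme to reference custom-v2.css instead of custom.css

After running the "Upload theme files" sync described above, custom-v2.css is a brand new object on the CDN, so no access point has a stale copy and there is nothing to purge.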

 

 


GlusterFS Cluster with CentOS on RackSpace Cloud

[Diagram: GlusterFS storage cluster]

We want to build a high-availability storage cluster based on GlusterFS using RackSpace Cloud Servers on CentOS. We also want to use RackSpace Cloud Networks to provide private networks for replication and cluster access. Finally, we want to use RackSpace Cloud Block Storage with LVM and XFS to simplify future expansion.

This article is based on a previous article we published on the same topic for Ubuntu:

GlusterFS Cluster With Ubuntu On RackSpace Cloud

As shown in the diagram, we’ll have two Cloud Networks, one for replication between the servers and one for access by clients to the storage.

[Diagram: filesystem layers - local, CBS, LVM, XFS, GlusterFS]

The next diagram shows the filesystem layers – local, CBS, LVM, XFS and finally GlusterFS.

Part 1 – Server Setup

1. Create servers and networks – Login to your RackSpace Cloud account and create two servers following these guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “CentOS 6.4”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the "Create Network" button and add a network named "StorageInternal". Then create a second network named "StorageAccess". When creating additional servers, make sure that you select the "StorageInternal" and "StorageAccess" networks.
  • In the Advanced Options section under Disk Partition choose the “Manual” option so that we can partition and format the hard drive to our preferences.

Here are the addresses assigned to the servers that we created, but yours may be different:

and:

2. Open SSH shell to servers – Open an SSH shell to each of the servers. Unless noted otherwise the following steps should be repeated on both servers.

3. Upgrade packages – Update repository listings and then upgrade all packages to latest versions:
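
On CentOS this is a single command (run it on both servers):

    yum -y update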

4. Setup /etc/hosts – Add hostnames on the “StorageInternal” network for each server.
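
For example, assuming these hypothetical "StorageInternal" addresses (substitute the addresses actually assigned to your servers), add the following to /etc/hosts on both servers:

    192.168.3.1    fileserver1-int
    192.168.3.2    fileserver2-int

The "-int" naming is just a convention we'll reuse below to keep the replication hostnames separate from the client-facing ones.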

5. Partition primary storage – Since we chose the “Manual” disk partition option, a 20GB root partition was created and the remainder of the primary storage was left unpartitioned. We need to partition this storage in preparation for later use in our LVM array.

Here’s our fdisk session
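
The session boils down to creating one new primary partition from the remaining free space (device names may differ on your servers):

    fdisk /dev/xvda
    # n - new partition
    # p - primary
    # 2 - partition number (xvda1 holds the 20GB root filesystem)
    #     accept the defaults for the first and last sectors to use all free space
    # w - write the partition table and exit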

Since we changed the partition table on the root device a reboot is required to make the new partition visible. Continue once the reboot has been completed.

6. Add Cloud Block Storage – For this example, we need about 250GB of usable storage. The partition we just created has 20GB so we need to add about 230GB more using Cloud Block Storage.

a. Login to the cloud management console and drill down to the first server. Scroll down to the bottom of the page and click “Create Volume” in the “Storage Volumes” section.

b. Complete the Create Volume form. I like to include the hostname in the volume name along with an iterator, so I used "fileserver1-1". Set Type to "Standard" unless you need high performance and can afford the premium for SSD drives. Enter the size, which is 230GB for this example. Click the "Create Volume" button to complete.

c. Click the “Attach Volume” button, select the volume and click the “Attach Volume” button.

d. Wait a few minutes for the storage to attach to the server.

From a terminal on the server use dmesg to see latest kernel messages:

At the bottom of the output you should see something like this:

So you can see that the new volume has been attached to the “xvdb” device. Your device name may be different.

e. Now partition the new storage as one large block. Here’s our fdisk session:

Repeat this process for the second server.

7. Setup LVM Partition – We now have 2 partitions:

/dev/xvda2 – 20GB
/dev/xvdb1 – 230GB

We’re going to use LVM to combine these two physical partitions into a single logical partition.

If you're not familiar with LVM then here's a great tutorial:

A Beginner’s Guide To LVM

Here are the steps:

a. Install LVM and XFS:
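
On CentOS 6 both are available from the base repository:

    yum -y install lvm2 xfsprogs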

b. Prepare the physical volume with pvcreate:
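
Using the two partitions listed above (adjust the device names if yours differ):

    pvcreate /dev/xvda2 /dev/xvdb1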

c. Add physical volumes to a “volume group” named “vg1”:
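
For example:

    vgcreate vg1 /dev/xvda2 /dev/xvdb1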

d. Create logical volume “gfs1” inside “vg1”:
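
For example, allocating all of the free space in the volume group:

    lvcreate -l 100%FREE -n gfs1 vg1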

e. Format the logical partition with XFS:
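
For example:

    mkfs.xfs /dev/vg1/gfs1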

f. Create a mount point for the partition:
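
We'll assume a mount point of /export/gfs1 for the rest of this article (the path is an arbitrary example choice):

    mkdir -p /export/gfs1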

g. Add to /etc/fstab
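
Assuming the mount point above, the fstab entry would look something like:

    /dev/vg1/gfs1    /export/gfs1    xfs    defaults,noatime    0 0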

h. Mount the new partition:
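
For example:

    mount /export/gfs1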

Repeat this step on both servers.

8. Install glusterfs yum repository – Run the following command to add the gluster repository to yum:
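
At the time of writing the GlusterFS project published a yum repository file for EPEL/CentOS; the URL may have changed since, but the command was along these lines:

    wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo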

9. Install glusterfs software – Use the following command to install the cluster software:
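
Something like:

    yum -y install glusterfs glusterfs-server glusterfs-fuse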

10. Confirm gluster version – Make sure you have the correct version:

11. Firewall configuration – Next we need to get a basic firewall configured to protect the servers. CentOS uses bare iptables so we just need to run the following:
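
A rule along these lines (the interface name depends on the order in which your Cloud Networks were attached; check with ifconfig):

    iptables -I INPUT -i eth4 -j ACCEPT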

This rule allows all traffic on eth4 which is the “StorageInternal” network.

Next we need to allow connections from clients. We’re going to restrict access based on IP address. You’ll need to repeat these commands for each different client. Let’s assume that we have a client with an IP address of x.x.x.x then add the following rules:
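
The exact ports depend on the GlusterFS version; for the releases current at the time, glusterd listened on 24007 and the brick ports started at 24009, so something like the following (adjust the upper end of the range to cover your brick count):

    iptables -I INPUT -s x.x.x.x -p tcp --dport 24007:24008 -j ACCEPT
    iptables -I INPUT -s x.x.x.x -p tcp --dport 24009:24029 -j ACCEPT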

Now commit the changes with:
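
On CentOS 6 that's:

    service iptables save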

12. Start glusterd – Set glusterd to start on boot and start it for the first time:
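
For example:

    chkconfig glusterd on
    service glusterd start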

13. Add nodes to cluster – On fileserver1, run this command to register fileserver2 to the trusted storage pool:
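
Assuming the /etc/hosts names suggested earlier:

    gluster peer probe fileserver2-int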

The results should look like this:

14. Check cluster status – On each node run "gluster peer status" and confirm the peer addresses. Here's fileserver1:

and here’s fileserver2:

15. Create volume – Next we need to create the actual GlusterFS volume:
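
We'll call the volume "gvol1" (the volume name and brick path are our example choices) and replicate it across both servers:

    gluster volume create gvol1 replica 2 fileserver1-int:/export/gfs1 fileserver2-int:/export/gfs1

This command only needs to be run on one of the servers.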

You should get back a response like:

16. Start the volume – Now start the volume to make it available to clients with:
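
For example:

    gluster volume start gvol1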

You should get a response like:

The howto by Falko Timme mentioned at the start of this article gives some useful advice and tips on how to troubleshoot if the volume fails to create or start.

17. Verify volume status – Check the status of the volume with:
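
For example:

    gluster volume info gvol1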

You should get a response similar to:

18. Allow client access – We want to allow all clients on the "StorageAccess" network access to the cluster:
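
With auth.allow, using our hypothetical "StorageAccess" subnet (substitute your own):

    gluster volume set gvol1 auth.allow 192.168.4.*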

Now show volume info again:

And you should get:

Setup on the server is complete. Now it’s time to add clients.

Part 2 – Client Setup

We’ll assume that the first client is also going to be CentOS.

1. Install glusterfs yum repository – Run the following command to add the gluster repository to yum:

2. Install glusterfs client – Execute the following command
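
The client only needs the glusterfs and FUSE packages, something like:

    yum -y install glusterfs glusterfs-fuse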

3. Confirm version – Verify that you have the correct version installed:

4. Create mount point – Create a directory that will be the mount point for the gluster partition:
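
We'll use /mnt/gluster in this example (an arbitrary choice):

    mkdir -p /mnt/gluster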

5. Setup /etc/hosts – Add the following entries to /etc/hosts:
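
For example, using our hypothetical "StorageAccess" addresses (substitute your own):

    192.168.4.1    fileserver1-int fileserver1-access
    192.168.4.2    fileserver2-int fileserver2-access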

Notice that we're mapping both the StorageInternal and StorageAccess network names to the same IP addresses on the StorageAccess network.

6. Mount volume – Execute the command to mount the gluster volume:
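
Using the example names from above:

    mount -t glusterfs fileserver1-access:/gvol1 /mnt/gluster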

7. Edit /etc/fstab – Add this line to /etc/fstab to make the mount start automatically on boot:
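
Something like:

    fileserver1-access:/gvol1    /mnt/gluster    glusterfs    defaults,_netdev    0 0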

And that completes the client setup procedure. You now have a working GlusterFS storage cluster and a connected client.

Comments and suggestions are welcomed.


Percona XtraDB Cluster On CentOS


This article is built off of a similar article we published last month:

Percona XtraDB Cluster on Ubuntu

The primary difference is this time we’re going to use CentOS instead of Ubuntu.

In this article we're going to build a Percona XtraDB Cluster using a pair of RackSpace Cloud Servers. Percona XtraDB Cluster is a MySQL-compatible replacement supporting multi-master replication. For this project we'll use the latest CentOS release, and we're going to use RackSpace Cloud Networks to set up an isolated segment for replication between the cluster nodes. We'll use two nodes in the cluster but you can add additional nodes as desired. Finally we'll use a RackSpace Cloud Load Balancer to distribute traffic between the nodes.

To get started, create the cloud servers from the RackSpace control panel using the following guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “CentOS 6.4”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “DatabaseInternal”. When creating additional servers, make sure that you select the “DatabaseInternal” network.
  • If you need more storage than is included in the base image, or if you want higher performance, then consider using RackSpace Cloud Block Storage. This is not covered in this article.

For this article we’ll assume that the “DatabaseInternal” cloud network is:

and our servers are:

and:

Here are the basic instructions from Percona that we’ll be following:

http://www.percona.com/doc/percona-xtradb-cluster/installation.html

To get started open SSH terminal sessions to each server and complete the following steps on each server unless noted otherwise:

 

1. Disable SELinux – The Percona docs state that the cluster will not work with SELinux and it must be disabled. CentOS on RackSpace Cloud installs with SELinux disabled by default. You can confirm this with:
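
For example:

    getenforce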

The response should be:
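
Assuming the getenforce check above:

    Disabled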

If SELinux is enabled then follow these instructions to disable:

http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-enable-disable.html

Now reboot the server to get a clean system without SELinux.

2. Configure firewall – Next we need to get a basic firewall configured to protect the servers. CentOS uses iptables with no frontend so we’ll edit:

/etc/sysconfig/iptables

Add the two lines highlighted in bold:
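
The stock CentOS ruleset ends with a REJECT rule, so the new rules need to go above it. They would look something like this (eth1 here is the "DatabaseInternal" interface; check your interface names with ifconfig):

    -A INPUT -i eth1 -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT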

The first new rule for eth1 allows all traffic between the database servers on the “DatabaseInternal” network. The second rule allows connections to TCP 3306 (mysql) on the eth0 network which is the public interface. You might want to change this rule and instead limit access to specific IP addresses like this:
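
For example:

    -A INPUT -s x.x.x.x -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT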

Replace x.x.x.x with the IP address of the client (web) server.

Now restart the service to apply changes:
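
For example:

    service iptables restart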

3. /etc/hosts – Let’s add some entries to the /etc/hosts file:
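
For example, with hypothetical "DatabaseInternal" addresses (substitute the addresses assigned to your servers):

    192.168.3.1    db1-int
    192.168.3.2    db2-int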

4. Add Percona yum Repository – Just follow the instructions here:

http://www.percona.com/doc/percona-xtradb-cluster/installation/yum_repo.html

Basically just execute:
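
At the time of writing the command from the Percona instructions was along these lines (check the page above for the current release RPM):

    rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm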

5. Install packages – Run the following command to install the cluster packages:
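
The package names current at the time were along these lines (check the Percona repository for the current, versioned names):

    yum -y install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client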

6. /etc/my.cnf – Set up configuration files on each server. The Percona distribution does not include a my.cnf file so you need to roll your own. A minimal configuration would be something like this:
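
Something along these lines (the cluster name, addresses and node names are examples; adjust wsrep_node_name and wsrep_node_address on each server):

    [mysqld]
    datadir=/var/lib/mysql
    user=mysql

    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2

    wsrep_provider=/usr/lib64/libgalera_smm.so
    wsrep_cluster_name=pxc_cluster
    wsrep_cluster_address=gcomm://db1-int,db2-int
    wsrep_node_name=db1
    wsrep_node_address=192.168.3.1
    wsrep_sst_method=rsync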

Most of this is straight from the Percona documentation. Key lines are:
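
From the example configuration above:

    wsrep_node_address=192.168.3.1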

This tells Percona what address to advertise to other nodes in the cluster. We need this set to the server “DatabaseInternal” interface. Without this setting SST will fail when it uses the wrong interface and is blocked by the firewall.
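
From the example configuration above:

    wsrep_cluster_address=gcomm://db1-int,db2-int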

This line identifies at least one other member in the cluster. Notice that we’re using the names we set in the /etc/hosts file.

The above example for my.cnf is very minimal. It does not address any database memory or performance tuning issues so you’ll likely want to expand upon the example.

7. Bootstrap Cluster – The cluster needs to be bootstrapped on the first server when it’s started for the first time. This can be accomplished with:
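
On the Percona releases current at the time this was done with the bootstrap-pxc action of the init script (older releases were bootstrapped by starting mysql with an empty gcomm:// cluster address):

    service mysql bootstrap-pxc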

The subject of bootstrapping is covered in more detail here:

http://www.percona.com/doc/percona-xtradb-cluster/manual/bootstrap.html

The first node startup will look something like this:

8. Secure mysql installation
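
Run the standard MySQL hardening script on the first node and follow the prompts:

    mysql_secure_installation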

9. /root/.my.cnf – Add a .my.cnf for MySQL root authentication as described here:

http://blogs.reliablepenguin.com/2012/10/09/create-my-cnf-file-for-mysql-authentication

This step is optional. The .my.cnf is convenient but should not be used in high security environments.

10. Start additional nodes

Now with the first node started, open a mysql command shell and view the wsrep_cluster_% status variables:
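
For example:

    SHOW STATUS LIKE 'wsrep_cluster%';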

Notice that the wsrep_cluster_size is 1 and the wsrep_cluster_status is “Primary”. This is normal for the first node in a newly bootstrapped cluster.

Now we’ll start Percona normally on each additional server. So go to db2 and run:
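
For example:

    service mysql start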

This time the start should look something like this:

Startup for the second node adds the SST or "State Snapshot Transfer". In this step the servers will use rsync and SSH keys to transfer a copy of the database from the first server to the second server.

Back on the first server in our mysql command shell we can check the status again:

Notice now according to the “wsrep_cluster_size” there are 2 nodes in the cluster.

At this point we have a functional cluster up and running.

If the second node fails to start then check the log file at:

/var/lib/mysql/db2.err

The most likely cause is a problem resolving or connecting to the db1-int server for SST.

11. Add extra functions – There are a couple of Percona-specific functions that can be added to support monitoring:
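
These are the UDFs that ship with Percona Server and are used by tools such as pt-table-checksum; they are typically registered from a mysql shell with statements along these lines (check the Percona documentation for the exact function list in your release):

    CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so';
    CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so';
    CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so';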

12. Add load balancer – The next step is to add the RackSpace Cloud Load Balancer. The load balancer will provide a single IP address for clients to connect to. It will then distribute these connections to the nodes in the cluster.

a. Login to your RackSpace Cloud Control Panel.
b. Go to Hosting section and the Load Balancers tab.
c. Click the “Create Load Balancer” button.
d. In the “Identification” section, enter a name for the load balancer like “lb-db-01” and select the Region. Use the same Region that the cluster nodes are located in.
e. In the “Configuration” section, select “On the Private RackSpace Network” for the “Virtual IP”. Set the “Protocol” to “MySQL” and the port to “3306”. Set the “Algorithm” to “Least Connections”.
f. In the “Add Nodes” section, click the “Add Cloud Servers” button and select each of the servers in the cluster.
g. Click the “Create Load Balancer” button to save the new load balancer.

It may take a couple of minutes for the load balancer to be created. When complete the IP address assigned to the load balancer will be visible. We’ll assume for this article that the address is:

Notice that this is a private, unroutable address on the RackSpace Service Network. This address is not accessible from the public Internet but it is visible to other cloud servers and devices in the same region on the RackSpace Service network. This is the address that your web or application servers will use to connect to the database.

13. Load balancer access controls – To minimize exposure of the database servers we need to add access controls on the load balancer that limit the range of addresses allowed to connect. Generally you'll only want connections from your web or application servers. In the RackSpace Cloud control panel, drill down to your load balancer, find the "Access Control" rules at the bottom and add a rule or rules to allow your client servers. Of course, if you're dynamically adding and removing servers then it might not be possible to use these access controls, or you might need to use the load balancer API to dynamically change them.

14. Allow load balancer on firewalls – Next we need to adjust the firewall on each node to allow MySQL connections from the load balancer. The connections from the load balancer to the cluster nodes do not come from the load balancer address. Instead they can come from a range of addresses. The exact range depends on what region the load balancer was created in. At the time of writing this article the ranges were:

For the DFW region, use:
10.183.248.0/22
10.189.254.0/23

For the IAD region, use:
10.189.254.0/23

For the ORD region, use:
10.183.250.0/23
10.183.252.0/23
10.183.254.0/23
10.189.246.0/23

For the LON region, use:
10.189.246.0/23
10.190.254.0/23

For the SYD region, use:
10.189.254.0/23

For the HKG region, use:
10.189.254.0/23

This list may change over time. The latest ranges should be available here:

http://www.rackspace.com/knowledge_center/article/using-cloud-load-balancers-with-rackconnect

For this article we used the DFW region so the iptables rules would be:
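
Following the same pattern as step 2, add rules like these to /etc/sysconfig/iptables on each node and restart iptables:

    -A INPUT -s 10.183.248.0/22 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
    -A INPUT -s 10.189.254.0/23 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT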

15. Create database & users – Now we're ready to create a database and user for our web application. On any of the cluster nodes, open a MySQL command shell, create the database and add a database user:
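
For example (the database name, user and password are placeholders; because these are ordinary SQL statements they replicate to the other nodes automatically):

    CREATE DATABASE myapp;
    GRANT ALL PRIVILEGES ON myapp.* TO 'myapp'@'%' IDENTIFIED BY 'ChangeMe123';
    FLUSH PRIVILEGES;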

Now your web application should be able to connect to the database cluster with the user that you just created. The database host should be the address of the load balancer.

16. Raise timeout – By default Cloud Load Balancers will time out any idle connection after 30 seconds. This article shows how to raise the timeout:

https://community.rackspace.com/products/f/25/t/89

17. Add monitoring tools – You'll probably want to add a few tools to monitor the database. Here's what we normally install:

myq_gadgets is a great set of monitoring utilities.

mysqltuner helps adjust memory allocations based on actual performance.

Percona Toolkit which can be downloaded from here:

http://www.percona.com/software/percona-toolkit

At this point you’re done! You have a working database cluster connected to your web application. Questions and comments are welcomed.

 


Install Scalr Command Line Utilities on CentOS 6.4

For managing cloud clusters we love Scalr, both as a hosted service and as the open-source self-hosted version. One of the best features is the command line utilities, which are easy to install and use.

Here are the docs:

http://wiki.scalr.com/display/docs/Scalr+Command+Line+Tools

Here are the install steps for CentOS 6.4:

1. Install python setuptools package:
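
For example:

    yum -y install python-setuptools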

2. Install python scalr package:
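
At the time of writing the package was published on PyPI and installed with easy_install (check the Scalr docs linked above for the current package name):

    easy_install scalr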

3. Run scalr configure to set up access credentials. See the docs for details, but it's something like this:
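
The flags below are assumptions that may differ between releases, so treat them as placeholders and check scalr help; the key ID and access key come from the API access section of your Scalr account:

    scalr configure -i YOUR_KEY_ID -a YOUR_ACCESS_KEY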

And now you're ready to go.

 

 

 

 

 

 


Find Malware Hidden In Image Files

Hackers will often try to hide malicious code in files with image extensions like “.gif”. Here’s a command line that will help identify suspicious files:
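
One approach (a sketch; adjust the path and extensions for your site) is to have the file utility report what each "image" actually contains and then filter out the ones that really are images:

    find . -type f \( -iname '*.gif' -o -iname '*.jpg' -o -iname '*.png' \) -exec file {} \; | grep -viE 'image|bitmap'

Anything left over (PHP scripts, ASCII text, HTML and so on) deserves a closer look.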

Not every file returned in this scan is malware. Pay special attention to files of type text. It’s not unusual to see an image file where the file extension does not match the content – so a .png file might actually contain a JPEG file.
