
Create Clonezilla USB Stick

This article describes the steps to create a bootable Clonezilla USB stick. This stick can be used to boot a computer and create or restore a cloned image.

The USB stick should be at least 1GB in size. It will be completely overwritten and erased, so make sure there is nothing of value on the stick.

This procedure assumes you’re working from a Windows PC and uses the Rufus open-source utility to burn the Clonezilla ISO to the USB stick and make it bootable. Other utilities can be used for this procedure in non-Windows environments.

  1. Download and install the Rufus utility from here: https://rufus.ie
  2. Download the alternative stable Clonezilla ISO image. Select the “amd64” CPU architecture and ISO file type from here: https://clonezilla.org/downloads.php
  3. Insert the USB stick.
  4. Start the Rufus utility.
  5. In the Device field, select the USB stick.
  6. Click the SELECT button and choose the Clonezilla ISO.
  7. Click the START button.

The process will take about a minute to complete. When finished, remove the USB stick and close the Rufus utility.


Getting the CA Certificate Chain

Most SSL providers don’t include a complete CA certificate chain with their SSL certificate, and it can be hard to track down the correct chain. There’s a great website for this purpose. Check out https://whatsmychaincert.com/ … just paste in the certificate and download the correct cert chain. I’ve used it hundreds of times with many different SSL providers and it’s worked every time.


Access-Control-Allow-Origin CORS Header With Multiple Domains

The Access-Control-Allow-Origin CORS header does not allow multiple domains, which can be a problem in some cases. A workaround on the server is to detect the incoming Origin header, match it against an allowed list, and then generate the Access-Control-Allow-Origin header using that domain. In Apache this can be accomplished with:
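A sketch of the Apache configuration (the domain names are placeholders for your own allowed list, and the mod_setenvif and mod_headers modules must be enabled):

```apache
# Capture the Origin header only when it matches the allowed list
SetEnvIfNoCase Origin "^https?://(www\.example\.com|app\.example\.com)$" CORS_ORIGIN=$0
# Echo the matched origin back as the CORS header
Header set Access-Control-Allow-Origin "%{CORS_ORIGIN}e" env=CORS_ORIGIN
```

Requests from an origin not on the list simply get no Access-Control-Allow-Origin header, so the browser blocks them.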



Content Updates With W3 Total Cache & RackSpace Cloud Files

We get frequent questions from WordPress clients using W3 Total Cache (W3TC) with the RackSpace Cloud Files (RCF) content distribution network (CDN) about how to make changes to the website visible.

First some general notes:

  1. Content changes or updates to pages or posts do not involve the CDN, so no special actions are required. W3TC will automatically flush old pages and post revisions from its caches.
  2. Media files uploaded through WordPress admin are automatically uploaded to the CDN and can be used immediately in posts or pages with no additional action.
  3. Files that are changed via FTP will require additional action. This includes media files, CSS and JavaScript, as discussed below.

As an example, let’s assume that you’ve modified your theme’s primary stylesheet at:

wp-content/themes/mytheme/style.css

You made the modification by downloading the file, editing on your workstation and then uploading via SFTP back to the server. In this case W3TC has no way of knowing that you’ve made a change to the file so it does not know that the file needs to be uploaded to the CDN.

First you’ll need to sync or upload the file to the CDN:

  1. Login to WordPress admin.
  2. Go to Performance -> CDN
  3. Click the appropriate upload button in the General section. The button will depend on what type of file you’ve changed. In this example we need to click the “Upload theme files” button.
  4. A popup window will open when you click the button. In the popup window, click the “Start” button. The sync process will start and you’ll be able to see each file as it is examined. If the file has not changed since the last sync, it will be marked “Object up-to-date”. If the file is new or has changed, it will be marked “Ok”.
  5. After the sync process completes you can close the popup window. If you have many files it may take several minutes to several hours to fully sync.

Now the new or changed file has been uploaded to the CDN. If it’s a new file, it will be immediately available for use in your site. If instead you are changing an existing file, we have to take an additional step and “purge” the file from the CDN.

CDNs function by distributing content to access points close to the user. For large CDNs there might be dozens or even hundreds of access points. When a user requests a file, the request is routed to the closest access point. The CDN infrequently checks back with your web server for changed or updated files. The frequency of these checks is called the “Time to Live” or TTL. By default RCF sets a 24 hour TTL on all files. So after uploading a change, it may take up to 24 hours before the change is visible on all access points. This can be a problem for our CSS change, as users on different access points might get different versions of the file.

W3TC includes a “purge” feature which you can use as follows:

  1. Login to WordPress admin
  2. Go to Performance -> CDN
  3. Look for the Purge button above the General section. Click the Purge button to open the CDN Purge Tool popup window.
  4. Enter the relative paths of each file that you want to purge into the “Files to purge” field.
  5. Click the Purge button and wait for the process to complete.
  6. Close the CDN Purge Tool window.

Purging on the CDN is not an instantaneous process. It may take up to about a half hour for all access points to receive and honor the purge request, but this is still much faster than the 24 hour TTL.

We have had mixed success with the W3TC CDN Purge Tool. If I need a change purged as fast as possible then I prefer to run the purge from the RackSpace Cloud Files control panel as follows:

  1. Login to https://mycloud.rackspace.com
  2. Go to Storage -> Files
  3. Drill down on your container and then to the file that you want to purge.
  4. Click on the Gear icon next to the file to open the drop down menu.
  5. Select “Refresh File (Purge)” from the menu. A small confirmation dialog will open.
  6. Click the “Purge File” button in the dialog to complete the process.

Again, purging on the CDN is not an instantaneous process. It may take up to about a half hour for all access points to receive and honor the purge request.

Is there any way to make updates to the CDN faster?

The short answer is “no”. The TTL and propagation delay are the nature of the CDN. Generally the benefits of distributing content win out over the update delays.

But two notes:

  1. New files do not experience any propagation delay. If you want to change a file and have it propagate instantly then change the name of the file so it is seen as a new file by the CDN.
  2. You can lower the TTL of the CDN container. If you have a big set of updates planned, it might be a good idea to lower the TTL a few days in advance so that you minimize propagation delay.



GlusterFS Cluster with CentOS on RackSpace Cloud

We want to build a high-availability storage cluster based on GlusterFS using RackSpace Cloud Servers on CentOS. We also want to use RackSpace Cloud Networks to provide private networks for replication and cluster access. Finally, we want to use RackSpace Cloud Block Storage with LVM and XFS to simplify future expansion.

This article is based on a previous article we published on the same topic for Ubuntu:

GlusterFS Cluster With Ubuntu On RackSpace Cloud

As shown in the diagram, we’ll have two Cloud Networks, one for replication between the servers and one for access by clients to the storage.

The next diagram shows the filesystem layers – local, CBS, LVM, XFS and finally GlusterFS.

Part 1 – Server Setup

1. Create servers and networks – Login to your RackSpace Cloud account and create two servers following these guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “CentOS 6.4”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “StorageInternal”. Then create a second network named “StorageAccess”. When creating additional servers, make sure that you select the “StorageInternal” and “StorageAccess” networks.
  • In the Advanced Options section under Disk Partition choose the “Manual” option so that we can partition and format the hard drive to our preferences.

Here are the addresses assigned to the servers that we created, but yours may be different:

and:

2. Open SSH shell to servers – Open an SSH shell to each of the servers. Unless noted otherwise the following steps should be repeated on both servers.

3. Upgrade packages – Update repository listings and then upgrade all packages to latest versions:
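On CentOS a single yum command refreshes the repository metadata and upgrades all installed packages:

```shell
yum -y update
```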

4. Setup /etc/hosts – Add hostnames on the “StorageInternal” network for each server.
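For example, entries like these in /etc/hosts on both servers (the private addresses shown are placeholders; use the StorageInternal addresses assigned to your servers):

```
192.168.3.1 fileserver1
192.168.3.2 fileserver2
```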

5. Partition primary storage – Since we chose the “Manual” disk partition option, a 20GB root partition was created and the remainder of the primary storage was left unpartitioned. We need to partition this storage in preparation for later use in our LVM array.

Here’s our fdisk session:
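fdisk is interactive, so the session is sketched below with the keystrokes as comments (your cylinder numbers will differ):

```shell
fdisk /dev/xvda
# n          create a new partition
# p          primary, partition number 2
# <Enter>    accept the default first and last cylinders (use all free space)
# t, 2, 8e   set the partition type to Linux LVM
# w          write the table and exit
```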

Since we changed the partition table on the root device, a reboot is required to make the new partition visible. Continue once the reboot has completed.

6. Add Cloud Block Storage – For this example, we need about 250GB of usable storage. The partition we just created has 20GB, so we need to add about 230GB more using Cloud Block Storage.

a. Login to the cloud management console and drill down to the first server. Scroll down to the bottom of the page and click “Create Volume” in the “Storage Volumes” section.

b. Complete the Create Volume form. I like to include the hostname in the volume name along with an iterator, so I used “fileserver1-1”. Set Type to “Standard” unless you need high performance and can afford the premium SSD drive. Enter the size, which is 230GB for this example. Click the “Create Volume” button to complete.

c. Click the “Attach Volume” button, select the volume and click the “Attach Volume” button.

d. Wait a few minutes for the storage to attach to the server.

From a terminal on the server use dmesg to see latest kernel messages:
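A quick way to see just the most recent messages:

```shell
dmesg | tail
```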

At the bottom of the output you should see something like this:

So you can see that the new volume has been attached to the “xvdb” device. Your device name may be different.

e. Now partition the new storage as one large block. Here’s our fdisk session:
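The dialog is the same as before but against the new device, this time creating a single partition that spans the whole volume:

```shell
fdisk /dev/xvdb
# n, p, 1    new primary partition number 1, accept the default sizes
# t, 8e      set the partition type to Linux LVM
# w          write the table and exit
```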

Repeat this process for the second server.

7. Setup LVM Partition – We now have 2 partitions:

/dev/xvda2 – 20GB
/dev/xvdb1 – 230GB

We’re going to use LVM to combine these two physical partitions into a single logical partition.

If you’re not familiar with LVM then here’s a great tutorial:

A Beginner’s Guide To LVM

Here are the steps:

a. Install LVM and XFS:

b. Prepare the physical volume with pvcreate:

c. Add physical volumes to a “volume group” named “vg1”:

d. Create logical volume “gfs1” inside “vg1”:

e. Format the logical partition with XFS:

f. Create a mount point for the partition:

g. Add to /etc/fstab

h. Mount the new partition:
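Steps a through h might look like the following (the mount point /export/gfs1 is our example choice; adjust to taste):

```shell
yum -y install lvm2 xfsprogs                  # a. install LVM and XFS tools
pvcreate /dev/xvda2 /dev/xvdb1                # b. prepare the physical volumes
vgcreate vg1 /dev/xvda2 /dev/xvdb1            # c. create volume group "vg1"
lvcreate -l 100%FREE -n gfs1 vg1              # d. logical volume "gfs1" using all free space
mkfs.xfs /dev/vg1/gfs1                        # e. format the logical volume with XFS
mkdir -p /export/gfs1                         # f. create the mount point
echo "/dev/vg1/gfs1 /export/gfs1 xfs defaults 0 0" >> /etc/fstab   # g. add to /etc/fstab
mount /export/gfs1                            # h. mount the new partition
```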

Repeat this step on both servers.

8. Install glusterfs yum repository – Run the following command to add the gluster repository to yum:
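The exact URL depends on the release you want; at the time something like this fetched the repo file:

```shell
wget -P /etc/yum.repos.d \
  http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
```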

9. Install glusterfs software – Use the following command to install the cluster software:
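On the servers we want the server daemon along with the client bits:

```shell
yum -y install glusterfs glusterfs-server glusterfs-fuse
```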

10. Confirm gluster version – Make sure you have the correct version:
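For example:

```shell
glusterfs --version
```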

11. Firewall configuration – Next we need to get a basic firewall configured to protect the servers. CentOS uses bare IPtables, so we just need to run the following:
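For example:

```shell
iptables -I INPUT -i eth4 -j ACCEPT
```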

This rule allows all traffic on eth4 which is the “StorageInternal” network.

Next we need to allow connections from clients. We’re going to restrict access based on IP address. You’ll need to repeat these commands for each different client. Let’s assume that we have a client with an IP address of x.x.x.x then add the following rules:
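The brick port range varies with the GlusterFS version, so verify these against your install; a typical set of rules looks like:

```shell
iptables -A INPUT -s x.x.x.x -p tcp --dport 111 -j ACCEPT            # portmapper
iptables -A INPUT -s x.x.x.x -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s x.x.x.x -p tcp --dport 24007:24011 -j ACCEPT    # gluster daemon and bricks
iptables -A INPUT -s x.x.x.x -p tcp --dport 38465:38467 -j ACCEPT    # gluster NFS
```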

Now commit the changes with:
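On CentOS 6 this writes the running rules to /etc/sysconfig/iptables so they survive a reboot:

```shell
service iptables save
```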

12. Start glusterd – Set glusterd to start on boot and start it for the first time:
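Using the standard CentOS 6 service tools:

```shell
chkconfig glusterd on
service glusterd start
```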

13. Add nodes to cluster – On fileserver1, run this command to register fileserver2 to the trusted storage pool:
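Using the StorageInternal hostname from /etc/hosts:

```shell
gluster peer probe fileserver2
```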

The results should look like this:

14. Check cluster status – On each node run “gluster peer status” and confirm the peer addresses. Here’s fileserver1:

and here’s fileserver2:

15. Create volume – Next we need to create the actual GlusterFS volume:
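The volume name and brick paths below are our example choices; a two-way replicated volume over the two servers is created with:

```shell
gluster volume create gvol1 replica 2 transport tcp \
  fileserver1:/export/gfs1/brick fileserver2:/export/gfs1/brick
```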

You should get back a response like:

16. Start the volume – Now start the volume to make it available to clients with:
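With gvol1 being the example volume name from above:

```shell
gluster volume start gvol1
```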

You should get a response like:

The howto by Falko Timme mentioned at the start of this article gives some useful advice and tips on how to troubleshoot if the volume fails to create or start.

17. Verify volume status – Check the status of the volume with:
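Again using the example volume name:

```shell
gluster volume info gvol1
```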

You should get a response similar to:

18. Allow client access – We want to allow all clients on the “StorageAccess” network access to the cluster:
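Assuming the StorageAccess network uses the 192.168.4.0/24 range (substitute your own):

```shell
gluster volume set gvol1 auth.allow '192.168.4.*'
```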

Now show volume info again:

And you should get:

Setup on the server is complete. Now it’s time to add clients.

Part 2 – Client Setup

We’ll assume that the first client is also going to be CentOS.

1. Install glusterfs yum repository – Run the following command to add the gluster repository to yum:
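As on the servers, pull in the repo file (the exact URL depends on the release):

```shell
wget -P /etc/yum.repos.d \
  http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
```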

2. Install glusterfs client – Execute the following command
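The client only needs the base package and the FUSE client:

```shell
yum -y install glusterfs glusterfs-fuse
```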

3. Confirm version – Verify that you have the correct version installed:
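For example:

```shell
glusterfs --version
```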

4. Create mount point – Create a directory that will be the mount point for the gluster partition:
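The path is our example choice:

```shell
mkdir -p /mnt/gfs
```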

5. Setup /etc/hosts – Add the following entries to /etc/hosts:
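For example, with placeholder StorageAccess addresses (note that the same names the cluster knows internally now resolve to StorageAccess IPs for the client):

```
192.168.4.1 fileserver1
192.168.4.2 fileserver2
```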

Notice that we’re mapping both the StorageInternal and StorageAccess network names to the same IP addresses on the StorageAccess network.

6. Mount volume – Execute the command to mount the gluster volume:
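Using the example volume and mount-point names from above:

```shell
mount -t glusterfs fileserver1:/gvol1 /mnt/gfs
```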

7. Edit /etc/fstab – Add this line to /etc/fstab to make the mount start automatically on boot:
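The _netdev option tells the boot process to wait for networking before attempting the mount (volume and mount-point names are our example choices):

```
fileserver1:/gvol1 /mnt/gfs glusterfs defaults,_netdev 0 0
```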

And that completes the client setup procedure. You now have a working GlusterFS storage cluster and a connected client.

Comments and suggestions are welcomed.
