
Scalr 4.5 Install Notes For CentOS 6.4

Last year we published an article on installing the open-source version of Scalr:

http://blogs.reliablepenguin.com/2013/08/29/scalr-install-notes

Now there’s a new 4.5 release of Scalr available so it’s time for an update.

We’re installing on CentOS 6.4 hosted on a RackSpace Performance 1 Cloud Server.

The installation instructions have improved since the last time around but Scalr is still a complex install:

https://scalr-wiki.atlassian.net/wiki/display/docs/Installing+Scalr+4.5

Follow these instructions one section at a time and make sure you’ve got the section complete and working before moving to the next section. We’ve provided notes below about each section.

Before you start, select a hostname for the server and add a DNS A record. For this article we’ll use scalr.domain.com.

You’ll have problems later on if you don’t get the hostname set up in advance.

Also I like to add some swap to the server:
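A 1GB swap file is usually enough for a server this size; for example:

    dd if=/dev/zero of=/swapfile bs=1M count=1024
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo "/swapfile swap swap defaults 0 0" >> /etc/fstab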

Create UNIX Users and Group for Scalr

I set up a script at /root/env.sh with the environment variables as follows:
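Something like this works. SCALR_INSTALL is referenced again later in the install; the user/group variable names here are our own convention:

    # /root/env.sh - environment for the Scalr install
    export SCALR_USER=apache
    export SCALR_GROUP=apache
    export SCALR_INSTALL=/opt/scalr

Source it in each new shell with ". /root/env.sh".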

This makes it easy to get the environment right.

Notice that we’re using user “apache” instead of “www-data” since this is CentOS instead of Ubuntu.

Configure your firewall

Edit /etc/sysconfig/iptables and add lines shown:
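At minimum the Scalr web interface needs HTTP and HTTPS open, for example:

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT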

And restart the service:
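    service iptables restart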

Install Scalr’s PHP Dependencies

The PHP dependencies are tricky. We tried using the IUS repository first but kept having problems with package dependency errors.

So I started over with the Remi repository:
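At the time of writing, the Remi repository (and the EPEL repository it depends on) could be added like this:

    wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
    rpm -Uvh epel-release-6-8.noarch.rpm remi-release-6.rpm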

Now edit /etc/yum.repos.d/remi.repo and enable the “remi” and “remi-php55” repositories.
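That is, set enabled=1 under both section headers:

    [remi]
    ...
    enabled=1

    [remi-php55]
    ...
    enabled=1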

The instructions say to install the php-pecl-rrd extension now but it’s better to wait due to dependency issues that we’ll see later.

Also notice that we’re installing php-pecl-http1 instead of php-pecl-http. This is critical.

Install Scalr’s Python Dependencies

Python is installed by default, so there are just a couple of additional packages to install:

As with PHP, we’ll defer installing the python-rrdtool package until a later step.

Configure PHP for Scalr

Edit /etc/php.ini and (a) enable “short_open_tag” and (b) set the “date.timezone” setting.
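For example (America/Chicago is just an illustration; use your own timezone):

    short_open_tag = On
    date.timezone = "America/Chicago"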

Update System SNMP MIBs

For CentOS there is no “snmp-mibs-downloader”. I ended up doing nothing for this step.

Download and Install Scalr 4.5

I chose to install Scalr in /opt/scalr with the following steps:

Notice that back in the first step, I set the SCALR_INSTALL environment variable. Now run the installer:

Install and Configure MySQL

Install MySQL server and set to start on boot:
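On CentOS 6:

    yum install mysql-server
    chkconfig mysqld on
    service mysqld start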

Set the mysql root password and secure the installation:

http://blogs.reliablepenguin.com/2012/10/09/secure-mysql-installation
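The short version is to run the bundled hardening script and answer yes to its prompts:

    mysql_secure_installation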

I like to add a .my.cnf file:

http://blogs.reliablepenguin.com/2012/10/09/create-my-cnf-file-for-mysql-authentication

Edit the /etc/my.cnf file and add this line to the “[mysqld]” section:

Now create the database and user for Scalr:
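For example, assuming a database and user both named scalr (pick your own password):

    mysql -e "CREATE DATABASE scalr;"
    mysql -e "GRANT ALL ON scalr.* TO 'scalr'@'localhost' IDENTIFIED BY 'secret';"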

And load the database structure and data:
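Assuming the SQL files ship in the sql/ directory of the Scalr distribution (check your download for the exact paths):

    mysql scalr < $SCALR_INSTALL/sql/structure.sql
    mysql scalr < $SCALR_INSTALL/sql/data.sql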

Create the Scalr Cache folder
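The cache directory must be writable by the web server user; assuming it lives at app/cache under the install directory:

    mkdir -p $SCALR_INSTALL/app/cache
    chown apache:apache $SCALR_INSTALL/app/cache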

Install and Configure rrdtool and rrdcached

The default version of rrdtool is too old; we need at least 1.4:

And now we can get the PHP and Python dependencies that we skipped earlier:
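These are the two packages we deferred:

    yum install php-pecl-rrd python-rrdtool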

Set rrdcached to start on boot:
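    chkconfig rrdcached on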

Edit /etc/sysconfig/rrdcached and change the “RRDCACHED_USER” to “root” and add the following line:

Now create the graphics and data directories:

And start the service:
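    service rrdcached start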

Install and Configure Apache

Install Apache “httpd” package and SSL support:
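    yum install httpd mod_ssl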

Edit /etc/httpd/conf.d/vhosts.conf and add:
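A minimal sketch, assuming Scalr’s document root is app/www under the install directory and the hostname chosen earlier:

    # (with Apache 2.2, also ensure NameVirtualHost *:80 is set once)
    <VirtualHost *:80>
        ServerName scalr.domain.com
        DocumentRoot /opt/scalr/app/www
        ErrorLog /var/log/httpd/scalr-error.log
        CustomLog /var/log/httpd/scalr-access.log combined
    </VirtualHost>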

Set Apache to start on boot and start it now:
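    chkconfig httpd on
    service httpd start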

Configure Scalr

Copy the sample config file:
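Assuming the sample sits next to the real location:

    cp /opt/scalr/app/etc/config.yml-sample /opt/scalr/app/etc/config.yml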

Now edit the config file at /opt/scalr/app/etc/config.yml and set the following parameters:
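At minimum the database connection must match the database and user created above. A sketch (exact key names may differ in your config.yml):

    scalr:
      connections:
        mysql:
          host: 'localhost'
          name: 'scalr'
          user: 'scalr'
          pass: 'secret'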

Leave the other parameters at default settings.

Configure the Scalr Cronjobs

Edit “apache” cronjobs:
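    crontab -u apache -e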

and add the following:
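The full list comes from the Scalr install guide linked above; two entries are shown here as an illustration (use the complete list from the guide):

    */2 * * * * /usr/bin/php -q /opt/scalr/app/cron/cron.php --Scheduler
    * * * * * /usr/bin/php -q /opt/scalr/app/cron-ng/cron.php --Poller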

Configure the Scalr Daemons

Edit /etc/init.d/scalr and copy/paste the following contents:

Now set the service to start on boot and start it for the first time:
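Assuming the init script above includes the standard chkconfig header:

    chkconfig --add scalr
    chkconfig scalr on
    service scalr start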

Validate your Scalr installation

Now run the validation script:

Log in to Scalr

Open a browser and go to:

http://scalr.domain.com

Login with user “admin” and password “admin”.

Go to Admin -> admin -> edit and change the admin password.

All Done!

Scalr install is now complete. You can get started using Scalr by adding a user and building an environment.

 


Scale Out For SharkTank

Recently one of our clients was featured on SharkTank, the critically-acclaimed business-themed show featuring the Sharks and their continuing search to invest in the best businesses and products that America has to offer.

Fohawx is a line of cool accessories that can easily attach to any kind of safety helmet – instantly transforming annoying headgear into a fashion statement. The appearance on SharkTank was a great opportunity for Fohawx to showcase their product.

Based on past experience we know that their web site could get 5000 or more simultaneous users. The small RackSpace Cloud Server on which they were hosted would be vaporized by this traffic. Due to the proprietary nature of their application server (ColdFusion) we could not add additional web servers. So a couple of days before the event we outlined measures to temporarily scale out their hosting capacity:

1. Re-sized the web server to 16GB of RAM. We wanted to go to 30GB but there was not sufficient space available on the “huddle” so we could only go to 16GB. We could have moved to a different huddle but this would have required an IP address change.

2. Added a separate database server. We spun up a 60GB server in RackSpace’s new Performance flavor and moved the application database from the web server to this new database server. The goal was to offload the web server and allow for faster database operations.

3. Added a pair of Varnish cache servers in front of the web server. The Varnish servers were built on small RackSpace Cloud Servers. As a caching proxy, Varnish can be used to offload static content from the web server. We used two Varnish servers to provide redundancy and increase throughput.

4. Added a RackSpace Cloud Load Balancer in front of the Varnish servers. The Cloud Load Balancer was configured to distribute traffic evenly between the Varnish servers.

In total this configuration cost about $3 per hour to operate, and it ran for 4 days, so the total cost was less than $220.

The Sharks were not big fans of Fohawx but viewers seemed to disagree. In the hour after the initial airing the site served more than 1 million hits with a peak of over 4100 simultaneous users. Amazingly the Varnish cache servers handled 99% of all requests and only passed 1% through to the web server.

This case study demonstrates how Reliable Penguin can combine RackSpace Cloud services with open-source components to rapidly meet emerging hosting challenges.

Watch the SharkTank episode here:

http://watchabc.go.com/shark-tank/SH559076/VDKA0_pbz8umsy/week-11

Fohawx starts around 00:23.

And don’t forget to get your Fohawx at:

http://fohawx.com


Percona XtraDB Cluster On CentOS


This article is built off of a similar article we published last month:

Percona XtraDB Cluster on Ubuntu

The primary difference is this time we’re going to use CentOS instead of Ubuntu.

In this article we’re going to build a Percona XtraDB Cluster using a pair of RackSpace Cloud Servers. Percona XtraDB Cluster is a MySQL-compatible replacement supporting multi-master replication. For this project we’ll use the latest CentOS release, and we’re going to use RackSpace Cloud Networks to set up an isolated segment for replication between the cluster nodes. We’ll build two nodes in the cluster, but you can add additional nodes as desired. Finally we’ll use a RackSpace Cloud Load Balancer to distribute traffic between the nodes.

To get started, create the cloud servers from the RackSpace control panel using the following guidelines:

  • Make sure you choose a Next Gen region for the servers.
  • For the Image choose “CentOS 6.4”
  • Select a size based on your requirements. We’re using 1GB for this article.
  • In the Networks section, when creating the first server, click the “Create Network” button and add a network named “DatabaseInternal”. When creating additional servers, make sure that you select the “DatabaseInternal” network.
  • If you need more storage than is included in the base image, or if you want higher performance, then consider using RackSpace Cloud Block Storage. This is not covered in this article.

For this article we’ll assume that the “DatabaseInternal” cloud network is 192.168.3.0/24, that server db1 has address 192.168.3.1 on that network, and that server db2 has address 192.168.3.2. Substitute your actual network and addresses throughout.

Here are the basic instructions from Percona that we’ll be following:

http://www.percona.com/doc/percona-xtradb-cluster/installation.html

To get started open SSH terminal sessions to each server and complete the following steps on each server unless noted otherwise:

 

1. Disable SELinux – the Percona docs state that the cluster will not work with SELinux and it must be disabled. CentOS on RackSpace Cloud installs with SELinux disabled by default. You can confirm this with:
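    getenforce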

The response should be:
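    Disabled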

If SELinux is enabled then follow these instructions to disable it:

http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-enable-disable.html

Now reboot the server to get a clean system without SELinux.

2. Configure firewall – Next we need to get a basic firewall configured to protect the servers. CentOS uses iptables with no frontend so we’ll edit:

/etc/sysconfig/iptables

Add the two new rules shown below:
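    -A INPUT -i eth1 -j ACCEPT
    -A INPUT -i eth0 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT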

The first new rule for eth1 allows all traffic between the database servers on the “DatabaseInternal” network. The second rule allows connections to TCP 3306 (mysql) on the eth0 network which is the public interface. You might want to change this rule and instead limit access to specific IP addresses like this:
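    -A INPUT -i eth0 -m state --state NEW -m tcp -p tcp -s x.x.x.x --dport 3306 -j ACCEPT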

Replace x.x.x.x with the IP address of the client (web) server.

Now restart the service to apply changes:
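    service iptables restart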

3. /etc/hosts – Let’s add some entries to the /etc/hosts file:
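Using the example addresses from above (substitute your actual “DatabaseInternal” addresses):

    192.168.3.1    db1-int
    192.168.3.2    db2-int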

4. Add Percona yum Repository – Just follow the instructions here:

http://www.percona.com/doc/percona-xtradb-cluster/installation/yum_repo.html

Basically just execute:
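At the time of writing that was:

    rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm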

5. Install packages – Run the following command to install the cluster packages:
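Package names may change between releases; at the time of writing:

    yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client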

6. /etc/my.cnf – Set up configuration files on each server. The Percona distribution does not include a my.cnf file, so you need to roll your own. A minimal configuration would be something like this:
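Here’s a sketch using the example names and addresses from earlier (this is db1’s file; swap wsrep_node_address for db2):

    [mysqld]
    datadir=/var/lib/mysql
    user=mysql

    # Path to the Galera library
    wsrep_provider=/usr/lib64/libgalera_smm.so

    # Cluster connection URL - the names we set in /etc/hosts
    wsrep_cluster_address=gcomm://db1-int,db2-int

    # Address to advertise to the other nodes (this server's DatabaseInternal interface)
    wsrep_node_address=192.168.3.1

    # Galera only supports row-level replication
    binlog_format=ROW

    # MyISAM is not supported in the cluster; default to InnoDB
    default_storage_engine=InnoDB

    # Required for parallel applying
    innodb_autoinc_lock_mode=2

    # Use rsync for State Snapshot Transfers
    wsrep_sst_method=rsync

    # All nodes must use the same cluster name
    wsrep_cluster_name=db_cluster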

Most of this is straight from the Percona documentation. Key lines are:
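    wsrep_node_address=192.168.3.1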

This tells Percona what address to advertise to other nodes in the cluster. We need this set to the address of the server’s “DatabaseInternal” interface. Without this setting SST will fail when it uses the wrong interface and is blocked by the firewall.
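The second key line:

    wsrep_cluster_address=gcomm://db1-int,db2-int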

This line identifies at least one other member in the cluster. Notice that we’re using the names we set in the /etc/hosts file.

The above example for my.cnf is very minimal. It does not address any database memory or performance tuning issues so you’ll likely want to expand upon the example.

7. Bootstrap Cluster – The cluster needs to be bootstrapped on the first server when it’s started for the first time. This can be accomplished with:
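On the first node only, start mysql with an empty cluster address (this is how the PXC init script was bootstrapped at the time; newer releases add a bootstrap-pxc command):

    /etc/init.d/mysql start --wsrep-cluster-address="gcomm://"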

The subject of bootstrapping is covered in more detail here:

http://www.percona.com/doc/percona-xtradb-cluster/manual/bootstrap.html

Watch the first node’s startup log (e.g. /var/lib/mysql/db1.err); it should show the node initializing a new cluster.

8. Secure mysql installation – Run the standard MySQL hardening script on the first node and answer yes to its prompts:
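    mysql_secure_installation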

9. /root/.my.cnf – Add a .my.cnf for MySQL root authentication as described here:

http://blogs.reliablepenguin.com/2012/10/09/create-my-cnf-file-for-mysql-authentication

This step is optional. The .my.cnf is convenient but should not be used in high security environments.

10. Start additional nodes

Now with the first node started, open a mysql command shell and view the wsrep_cluster_% status variables:
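    mysql> SHOW STATUS LIKE 'wsrep_cluster_%';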

Notice that the wsrep_cluster_size is 1 and the wsrep_cluster_status is “Primary”. This is normal for the first node in a newly bootstrapped cluster.

Now we’ll start Percona normally on each additional server. So go to db2 and run:
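    service mysql start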

This time the startup log (/var/lib/mysql/db2.err) will show an additional step.

Startup for the second node adds the SST or “State Snapshot Transfer”. In this step the servers will use rsync to transfer a copy of the database from the first server to the second server.

Back on the first server in our mysql command shell we can check the status again:
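    mysql> SHOW STATUS LIKE 'wsrep_cluster_%';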

Notice that, according to “wsrep_cluster_size”, there are now 2 nodes in the cluster.

At this point we have a functional cluster up and running.

If the second node fails to start then check the log file at:

/var/lib/mysql/db2.err

The most likely cause is a problem resolving or connecting to the db1-int server for SST.

11. Add extra functions – There are a couple of Percona-specific functions that can be added to support monitoring:
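From the Percona installation docs, these are created with:

    mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
    mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
    mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"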

12. Add load balancer – The next step is to add the RackSpace Cloud Load Balancer. The load balancer will provide a single IP address for clients to connect to. It will then distribute these connections to the nodes in the cluster.

a. Login to your RackSpace Cloud Control Panel.
b. Go to Hosting section and the Load Balancers tab.
c. Click the “Create Load Balancer” button.
d. In the “Identification” section, enter a name for the load balancer like “lb-db-01” and select the Region. Use the same Region that the cluster nodes are located in.
e. In the “Configuration” section, select “On the Private RackSpace Network” for the “Virtual IP”. Set the “Protocol” to “MySQL” and the port to “3306”. Set the “Algorithm” to “Least Connections”.
f. In the “Add Nodes” section, click the “Add Cloud Servers” button and select each of the servers in the cluster.
g. Click the “Create Load Balancer” button to save the new load balancer.

It may take a couple of minutes for the load balancer to be created. When complete, the IP address assigned to the load balancer will be visible. We’ll assume for this article that the address is 10.181.102.25 (an example value; yours will differ).

Notice that this is a private, unroutable address on the RackSpace Service Network. This address is not accessible from the public Internet but it is visible to other cloud servers and devices in the same region on the RackSpace Service network. This is the address that your web or application servers will use to connect to the database.

13. Load balancer access controls – To minimize exposure of the database servers we need to add access controls on the load balancer that will limit the range of addresses that are allowed to connect. Generally you’ll only want connections from your web or application servers. In the RackSpace Cloud control panel, drill down to your load balancer, find the “Access Control” rules at the bottom and add a rule or rules to allow your client servers. Of course if you’re dynamically adding and removing servers then it might not be possible to use these access controls. Or you might need to use the load balancer API to dynamically change access controls.

14. Allow load balancer on firewalls – Next we need to adjust the firewall on each node to allow MySQL connections from the load balancer. The connections from the load balancer to the cluster nodes do not come from the load balancer address. Instead they can come from a range of addresses. The exact range depends on what region the load balancer was created in. At the time of writing this article the ranges were:

For the DFW region, use:
10.183.248.0/22
10.189.254.0/23

For the IAD region, use:
10.189.254.0/23

For the ORD region, use:
10.183.250.0/23
10.183.252.0/23
10.183.254.0/23
10.189.246.0/23

For the LON region, use:
10.189.246.0/23
10.190.254.0/23

For the SYD region, use:
10.189.254.0/23

For the HKG region, use:
10.189.254.0/23

This list may change over time. The latest ranges should be available here:

http://www.rackspace.com/knowledge_center/article/using-cloud-load-balancers-with-rackconnect

For this article we used the DFW region so the iptables rules would be:
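    -A INPUT -m state --state NEW -m tcp -p tcp -s 10.183.248.0/22 --dport 3306 -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp -s 10.189.254.0/23 --dport 3306 -j ACCEPT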

15. Create database & users – Now we’re ready to create a database and user for our web application. On any of the cluster nodes, open a MySQL command shell, create the database and add a database user:
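For example, assuming a database named appdb and a user named appuser (connections will arrive from the load balancer address ranges, so the user host is a wildcard here; tighten it as needed):

    mysql> CREATE DATABASE appdb;
    mysql> GRANT ALL ON appdb.* TO 'appuser'@'%' IDENTIFIED BY 'secret';
    mysql> FLUSH PRIVILEGES;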

Now your web application should be able to connect to the database cluster with the user that you just created. The database host should be the address of the load balancer.

16. Raise timeout – By default Cloud Load Balancers will time out any idle connection after 30 seconds. This article shows how to raise the timeout:

https://community.rackspace.com/products/f/25/t/89

17. Add monitoring tools – You’ll probably want to add a few tools to monitor the database. Here’s what we normally install:

myq_gadgets is a great set of monitoring utilities.

mysqltuner helps adjust memory allocations based on actual performance.

Percona Toolkit, which can be downloaded from here:

http://www.percona.com/software/percona-toolkit

At this point you’re done! You have a working database cluster connected to your web application. Questions and comments are welcomed.

 


RackSpace Cloud Networks Bug and Interesting Notes

There’s a bug in the automation for provisioning RackSpace Cloud Servers with Cloud Networks from the control panel. If you set up a server with more than one Cloud Network, the networks get assigned to the wrong interfaces. This is not a problem if all the servers you’re connecting to the Cloud Network are created from the control panel because they will all be consistently wrong. But if some of your servers are created from the API then you’ll have a problem because the API automation assigns the networks correctly.

The fix, until RackSpace corrects the bug, is to manually correct the address assignments on the control panel generated cloud servers.

While troubleshooting this issue, RackSpace support pointed out a couple of interesting commands:

To see what your networking should be, you can run this:
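On RackSpace’s XenServer-based cloud servers the intended network layout is stored in xenstore:

    xenstore-ls vm-data/networking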

And to view a specific interface run:
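    xenstore-read vm-data/networking/<mac_address>

Here <mac_address> is one of the keys listed by the previous command (the interface’s MAC address without colons).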


This will show the config that should be assigned to a particular MAC address.

We love Cloud Networks but hate bugs in automation, especially since we wasted an hour tracking this one down!