
Plesk – How to Clear a Hung Database Copy

Recent versions of Plesk include a “Make a Copy” feature for databases. Unfortunately, on occasion the copy will fail but the copy task will not be removed, and you’ll then see an error message every time you go to the affected database in Plesk.

You can view a list of the current tasks with the following command from a root ssh login:
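Plesk keeps its long-running background jobs in the psa database; assuming your Plesk version stores them in the `longtasks` table, you can list them like this:

```shell
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -e "select * from longtasks"
```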

You can clear the task by running the following command:
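Using the id of the stuck task from the listing (the id value 123 here is hypothetical), the entry can be deleted directly:

```shell
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -e "delete from longtasks where id=123"
```

Reload the database page in Plesk afterward to confirm the message is gone.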


Passive Mode FTP with iptables

There’s lots of advice on the net about how to set up a server with iptables to allow passive mode FTP. Below is the approach that we’ve found to be most effective.

Start by configuring your FTP daemon to use a fixed range of ports. We use 41361 to 65534, a high-numbered block that includes most of the IANA registered ephemeral port range (49152–65535); any large high range works, as long as the FTP daemon and the firewall agree on it. The exact config depends on what FTP software you’re using:

vsftpd

Edit /etc/vsftpd/vsftpd.conf and add the following lines:
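The vsftpd passive-mode directives, using the range chosen above:

```
pasv_enable=YES
pasv_min_port=41361
pasv_max_port=65534
```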

proftpd

Edit /etc/proftpd.conf and add to the Global section:
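For proftpd a single PassivePorts directive covers it:

```
PassivePorts 41361 65534
```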

Now restart your FTP service so the changes take effect.
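For example, depending on which daemon you run:

```shell
service vsftpd restart     # or: service proftpd restart
```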

Next you’ll need to configure the ip_conntrack_ftp iptables module to load. On Redhat/CentOS just edit /etc/sysconfig/iptables-config and add “ip_conntrack_ftp” to the IPTABLES_MODULES like this:
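The edited line in /etc/sysconfig/iptables-config would read:

```
IPTABLES_MODULES="ip_conntrack_ftp"
```

(On RHEL6 and later the module is named nf_conntrack_ftp.)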

Next edit /etc/sysconfig/iptables and add a rule to allow TCP port 21:
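Assuming the stock RH-Firewall-1-INPUT chain, the added rule looks like this:

```
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
```

No rule is needed for the passive data ports themselves; the conntrack helper tracks those as related connections.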

Now restart the iptables service:
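On RHEL/CentOS that’s:

```shell
service iptables restart
```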

You can verify that the module has loaded with lsmod like this:

and you’ll get something like this:
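For example:

```shell
lsmod | grep conntrack_ftp
```

If the module name appears in the output it has loaded; the passive port range itself is enforced by the FTP daemon config above, not by the module.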

And that’s all it takes to get passive mode FTP working behind iptables.

One extra note: If your server is NATed behind a physical firewall then you’ll probably need to load the “ip_nat_ftp” iptables module.

On an AWS EC2 server with vsftpd I had to add “pasv_address=x.x.x.x” to the /etc/vsftpd/vsftpd.conf file, where x.x.x.x was the public (elastic) address of the server. On an AWS EC2 server with Plesk and proftpd I had to add “MasqueradeAddress x.x.x.x” to a new file at /etc/proftpd.d/1-pasv_addr.conf.


NFS Server On RackSpace Cloud

Today we’re going to setup an NFS server and client on a pair of RackSpace Cloud Servers.

To get started create two cloud servers – one will be the NFS server and the other will be the NFS client. We’ll be using CentOS (RHEL) for our project but you can use the distribution of your choice.

We’re going to do the NFS share over the Private Net interface which is a 10.x.x.x unroutable network. This way NFS is not exposed to the public network and the security issues are simplified.

Our server will be named “file01” and our client will be “client01”. We will share the /data directory tree on the server and mount it to /data on the client.

Server

  1. Install necessary packages. You’ll need nfs-utils and portmap. Additional dependent packages will be installed automatically:

    Note: On RHEL6, “portmap” is now “rpcbind”.
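    On CentOS 5 that’s (substitute rpcbind for portmap on RHEL6):

    ```shell
    yum -y install nfs-utils portmap
    ```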

  2. Setup /etc/hosts. On the server, I like to add host entries to /etc/hosts for each client so that it’s easy to reference them in the configuration files. So we’ll add this line to /etc/hosts:
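    For example, with a hypothetical private address for the client:

    ```
    10.180.2.35   client01-private
    ```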

    Remember to use the private address of the client. We want the NFS traffic to use the non-public network for improved security.
  3. Setup /etc/exports. This text file defines what paths will be shared and to whom. Here is a simple case sharing /data to a single client:
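    A minimal example using commonly chosen options (adjust to your needs):

    ```
    /data   client01-private(rw,sync,no_root_squash)
    ```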

    There are a number of different configuration options available for the /etc/exports file. See the man page for full details.
  4. Setup /etc/hosts.allow. This text file defines access controls for the NFS related services. Here’s an example configuration:
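    A sketch covering the NFS-related daemons, again using the hypothetical client address:

    ```
    portmap: 10.180.2.35
    lockd:   10.180.2.35
    mountd:  10.180.2.35
    rquotad: 10.180.2.35
    statd:   10.180.2.35
    ```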
  5. Setup iptables. Edit the /etc/sysconfig/iptables file and add the following line to the RH-Firewall-1-INPUT table right before the final COMMIT:
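    For example, with the client’s hypothetical private address:

    ```
    -A RH-Firewall-1-INPUT -s 10.180.2.35 -j ACCEPT
    ```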

    Notice that the IP address is the private interface on the client. You’ll need to add additional lines if you have more than one client. This is a very simple rule that allows all traffic from the client to the server. Now restart iptables:
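    On RHEL/CentOS:

    ```shell
    service iptables restart
    ```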
  6. Configure startup. Use the following commands to configure the nfs and portmap services to start on boot.
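    For example:

    ```shell
    chkconfig portmap on
    chkconfig nfs on
    ```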
  7. Start services. Start or restart the portmap and nfs services. You’ll need to do this anytime that you change the NFS configuration.
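    For example:

    ```shell
    service portmap restart
    service nfs restart
    ```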

Client

  1. Install necessary packages. You’ll need nfs-utils and portmap. Additional dependent packages will be installed automatically:
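    As on the server (rpcbind instead of portmap on RHEL6):

    ```shell
    yum -y install nfs-utils portmap
    ```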
  2. Setup /etc/hosts. On the client, I like to add a host entry to /etc/hosts for the server so that it’s easy to reference in the configuration files. So we’ll add this line to /etc/hosts:
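    For example, with a hypothetical private address for the server:

    ```
    10.180.2.34   file01-private
    ```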

    Remember to use the private address of the server. We want the NFS traffic to use the non-public network for improved security.
  3. Configure startup. Use the following commands to configure the portmap service to start on boot.
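    For example:

    ```shell
    chkconfig portmap on
    ```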
  4. Start portmap service. Start or restart the portmap.
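    For example:

    ```shell
    service portmap restart
    ```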
  5. Setup /etc/fstab. The /etc/fstab text file contains a list of filesystems that should be mounted on the client server. We need to add a new line at the bottom of the file for our NFS server. Here’s an example:
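    A sketch using common NFS mount options:

    ```
    file01-private:/data   /data   nfs   rw,hard,intr   0 0
    ```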

    Notice that we’ve stated that the /data share on file01-private should be mounted to /data on the client server.
  6. Create the mount point. The mount point is just an empty directory to which the remote filesystem will be connected.
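    For example:

    ```shell
    mkdir /data
    ```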
  7. Mount the share. Next step is to mount the share with this command:
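    Since the share is defined in /etc/fstab, the mount point alone is enough:

    ```shell
    mount /data
    ```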

And now you should have a working NFS mount of file01:/data to client01:/data.

If it’s not working, here’s a good page of advice on how to troubleshoot:

http://tldp.org/HOWTO/NFS-HOWTO/troubleshooting.html


Syncing Content Between RackSpace Cloud Servers With Unison

We have two RackSpace Cloud Servers running CentOS that need to have their web content kept in sync. Changes on either server need to be replicated to the other server. An easy way to accomplish this is with the Unison File Synchronizer (http://www.cis.upenn.edu/~bcpierce/unison/). Here are the steps:

1. Install SSH keys for root on each server so SSH can run without passwords between the servers.
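A typical sequence on web1 (the web2-priv hostname comes from the next step; run the mirror-image commands on the other server):

```shell
ssh-keygen -t rsa          # accept the defaults, empty passphrase
ssh-copy-id root@web2-priv
```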

2. Add /etc/hosts file entries to map the private addresses of the servers to names.
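For example, with hypothetical private addresses:

```
10.180.2.10   web1-priv
10.180.2.11   web2-priv
```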

3. Install Unison on each server:
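Unison isn’t in the base CentOS repositories, so this assumes a third-party repository such as EPEL is enabled:

```shell
yum -y install unison
```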

4. Create a profile on each server at /root/.unison/sync_web.prf containing:
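A minimal sketch of the profile as it would appear on web1, assuming the web content lives in /var/www/html:

```
root = /var/www/html
root = ssh://web2-priv//var/www/html
batch = true
```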

Notice the hostname “web2-priv” above. The file on web1 should reference web2-priv; the file on web2 should reference web1-priv.

You’ll need to adjust the path listed in the file to match your environment.

5. Next create a script to run Unison with the profile created in the previous step. Name the script /root/sync_web.sh and make it executable.
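A minimal version of the script, assuming the profile name from the previous step:

```shell
#!/bin/sh
# Run unison non-interactively with the sync_web profile
/usr/bin/unison sync_web >/dev/null 2>&1
```

Make it executable with chmod 755 /root/sync_web.sh.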

6. Add an entry to /etc/crontab to run Unison once per minute:
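Using the system crontab format (which includes a user field):

```
* * * * * root /root/sync_web.sh
```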

Everything should now be ready to go. You can test by running the sync_web.sh script manually on each server. If there are no errors then try adding, changing and removing files on both servers and verify that the changes are synced within one minute.


MySQL Load Balancing With HAProxy

The goal of this series of articles has been to construct a high availability and load balanced MySQL cluster with CentOS on the RackSpace Cloud.

You should begin with this article to setup the HA Linux cluster:

HA Linux Cluster On RackSpace Cloud Servers

Then follow this article to add the multi-master MySQL replication:

MySQL Multi Master HA Cluster

The last part of the process is to add load balancing.

1. Start by installing HAProxy on both servers:
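On both nodes:

```shell
yum -y install haproxy
```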

Unfortunately yum will provide an outdated version so we need to upgrade from source as follows:
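A sketch of a source build; the version number and download URL here are illustrative, so substitute the current 1.4 release:

```shell
cd /usr/local/src
wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.24.tar.gz   # hypothetical version
tar xzf haproxy-1.4.24.tar.gz
cd haproxy-1.4.24
make TARGET=linux26
install -m 755 haproxy /usr/local/sbin/haproxy
```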

Now edit /etc/init.d/haproxy and change this line:

to:
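Assuming the upgraded binary was installed to /usr/local/sbin, the edit would look something like this:

```
# before:
exec="/usr/sbin/haproxy"
# after:
exec="/usr/local/sbin/haproxy"
```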

2. Now edit /etc/haproxy/haproxy.cfg

a. Remove all existing “listen”, “frontend” and “backend” sections.

b. Find this line if it exists:

and change it to:
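MySQL is a plain TCP protocol, so the likely change is the default proxy mode:

```
# before:
mode    http
# after:
mode    tcp
```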

c. Add this listen section:
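A plausible sketch of the section in HAProxy 1.4 syntax, keeping the placeholders used in this article:

```
listen mysql-cluster 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    server db01 [HOST1_IP]:3306 check
    server db02 [HOST2_IP]:3306 check

listen stats 0.0.0.0:8080
    mode http
    stats enable
    stats uri /
    stats auth admin:[PASSWORD]
```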

The [PASSWORD] controls access to the HAProxy web interface. And [HOST1_IP] and [HOST2_IP] are the private addresses of the servers.

d. Repeat on the other server.

3. Set haproxy service to start on boot:
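On both servers:

```shell
chkconfig haproxy on
```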

4. Finally start the haproxy service.

/sbin/service haproxy restart
