
MariaDB on CentOS 7 – “Error in accept: Too many open files”

By default it seems the soft and hard open files limits for MariaDB on CentOS 7 are 1024 and 4096 respectively. You can see these limits by first getting the process ID:
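For example, assuming the daemon process is named mysqld (as it is on CentOS 7):

    pidof mysqld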

And then looking at the limits in the proc filesystem:
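    cat /proc/$(pidof mysqld)/limits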

You’ll see something like this:
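    Limit                     Soft Limit           Hard Limit           Units
    ...
    Max open files            1024                 4096                 files
    ...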

Notice the numbers for “Max open files”.

If you run into problems with MariaDB failing and you see errors like this in the log:
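    [ERROR] Error in accept: Too many open files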

Then you need to increase the open files limits by editing:
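On CentOS 7 that's the unit file:

    /usr/lib/systemd/system/mariadb.service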

and adding this line:
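For example (pick a limit that suits your workload; 10000 here is just an illustration):

    LimitNOFILE=10000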

to the “[Service]” section. Then reload the systemd configuration:
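    systemctl daemon-reload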

and restart the MariaDB service:
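    systemctl restart mariadb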

Now the limit will be increased.  For example:
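    cat /proc/$(pidof mysqld)/limits | grep "open files"

which should now report something like (using the illustrative 10000 from above):

    Max open files            10000                10000                files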

UPDATE: We’ve seen similar problems with nginx. The solution is similar … increase the limits for the nginx service.
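A sketch of the same fix for nginx (the value is illustrative):

    # In the [Service] section of the nginx unit (or an override, see below):
    LimitNOFILE=10000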

UPDATE: As noted by Bastiaan Welmers in the comments, it’s better to copy the service control file than to edit it in place:
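For example (so a package update doesn't overwrite your changes; the copy in /etc/systemd/system takes precedence):

    cp /usr/lib/systemd/system/mariadb.service /etc/systemd/system/mariadb.service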

UPDATE: 

As described here:

https://docs.fedoraproject.org/en-US/quick-docs/systemd-understanding-and-administering/#_modifying_existing_systemd_services

Create an override file with:
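    systemctl edit mariadb.service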

or:
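create the drop-in by hand (the override.conf filename is conventional):

    mkdir -p /etc/systemd/system/mariadb.service.d
    vi /etc/systemd/system/mariadb.service.d/override.conf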

Put the modified settings in the override file:
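For example, with the illustrative value from above:

    [Service]
    LimitNOFILE=10000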

Reload systemd config:
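    systemctl daemon-reload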

And restart mariadb:
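    systemctl restart mariadb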

UPDATE:

On a server with Plesk, view the current open files limit with:
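Assuming Plesk's MariaDB also runs as mysqld, the same /proc check applies:

    cat /proc/$(pidof mysqld)/limits | grep "open files"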


XFS Requires “inode64” On Large Filesystems

Recently ran into a problem with a large distributed GlusterFS filesystem. All of a sudden we started getting “no space left on device” errors when trying to write files. The individual bricks of the distributed GlusterFS volume were hosted on XFS-formatted partitions. After some investigation we found that the “no space left on device” errors started once a brick’s XFS filesystem exceeded 16TB.

The solution was to add “inode64” to the mount options for the XFS partition. The XFS FAQ states the following:

By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like “disk full” when you still have plenty space free, but there’s no more place in the first TB to create a new inode. Also, performance sucks.
To come around this, use the inode64 mount options for filesystems >1TB. Inodes will then be placed in the location where their data is, minimizing disk seeks.

After adding “inode64” to the mount options in /etc/fstab, we unmounted and remounted the filesystem and restarted glusterd. Now the distributed volume is working correctly.
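For reference, a brick's /etc/fstab entry might look like this (device and mount point are hypothetical):

    /dev/sdb1  /bricks/brick1  xfs  defaults,inode64  0  0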


How To Clear btmp File

Running low on storage? Check /var/log/btmp, where failed login attempts are logged. This file can grow very large:
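    ls -lh /var/log/btmp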

The file can be cleared like this:
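One common approach is to truncate it to zero bytes:

    > /var/log/btmp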

Credit for this goes to Tournas Dimitrios who has a great article on btmp:

https://tournasdimitrios1.wordpress.com/2010/12/28/how-to-clear-and-delete-last-logged-in-users-and-bad-login-attemps-log-wtmp-and-btmp/


Using SSHFS To Mount Remote Filesystem

If you need to do a lot of file operations against a remote ssh/sftp server then sshfs might be the perfect tool. sshfs is a FUSE filesystem that you can mount onto a local mount point. Once mounted you can manipulate the files as if they were local.

Here’s the project site:

http://fuse.sourceforge.net/sshfs.html

You might need to install it with yum or apt-get:
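Package names vary slightly by distribution (fuse-sshfs is the EPEL package name on CentOS):

    yum install fuse-sshfs     # CentOS/RHEL with EPEL
    apt-get install sshfs      # Debian/Ubuntu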

After sshfs is installed you can use it like this:
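For example (user, host, and remote path are placeholders):

    mkdir mymount
    sshfs user@remotehost:/remote/dir mymount/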

You’ll be prompted for a password unless you have keys set up with the remote server.

Now you can copy files to and from under mymount/.


Chrooted SFTP Users

Here are the steps to create chrooted SFTP users.

1. Comment out the following line in /etc/ssh/sshd_config
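On CentOS the stock line typically looks like this (the sftp-server path varies by distribution):

    Subsystem sftp /usr/libexec/openssh/sftp-server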

2. Append the following in /etc/ssh/sshd_config
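Something along these lines, replacing the commented-out subsystem with the in-process one:

    Subsystem sftp internal-sftp

    Match User USERNAME
        ChrootDirectory /path/to/chroot
        ForceCommand internal-sftp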

where USERNAME is the user and ChrootDirectory is the path that the user will be locked into. Add a new “Match User” stanza for each user that needs to be chrooted.  This allows each user to have a unique directory.

3. Restart SSH
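    systemctl restart sshd    # or "service sshd restart" on pre-systemd systems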

4. Create the SFTP user group
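For example (the group name sftpusers is arbitrary):

    groupadd sftpusers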

5. Modify the user
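A sketch, adding the user to the group and removing shell access:

    usermod -aG sftpusers -s /sbin/nologin USERNAME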

SCP and SSH are not allowed with this setup but you could change the shell to allow them…

6. The highest directory in the chroot tree must be owned by user/group root and must not be writable by any other user or group, or sshd will refuse the login:
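    chown root:root /path/to/chroot
    chmod 755 /path/to/chroot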
