

Dirty Cow Vulnerability (CVE-2016-5195)

On October 19, 2016, a privilege escalation vulnerability in the Linux kernel was disclosed. The bug is nicknamed Dirty COW because the underlying issue is a race condition in the way the kernel handles copy-on-write (COW). Dirty COW has existed for a long time, at least since 2007 with kernel version 2.6.22, so the vast majority of servers are at risk.

Exploiting this bug means that a regular, unprivileged user on your server can gain write access to any file they can read, and can therefore increase their privileges on the system. More information can be found on CVE-2016-5195 from Canonical, Red Hat, and Debian.

Fortunately, most major distributions have already released a fix. You can follow this tutorial to see if your server is vulnerable and to apply updates as needed.

Check Vulnerability


To find out if your server is affected, check your kernel version.

  • uname -rv

You’ll see output like this:
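The sample output was lost from this post; on an Ubuntu 16.04 machine it would look something like this (the exact release number and build date vary by system):

```
4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016
```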


If your version is earlier than the following, you are affected:

  • 4.8.0-26.28 for Ubuntu 16.10
  • 4.4.0-45.66 for Ubuntu 16.04 LTS
  • 3.13.0-100.147 for Ubuntu 14.04 LTS
  • 3.2.0-113.155 for Ubuntu 12.04 LTS
  • 3.16.36-1+deb8u2 for Debian 8
  • 3.2.82-1 for Debian 7
  • 4.7.8-1 for Debian unstable
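Eyeballing dotted version strings is error-prone, so as a convenience here is a small sketch (not part of the original tutorial) that compares the running kernel against a fixed version using GNU `sort -V`. The `FIXED` value is the Ubuntu 16.04 entry from the list above and must be swapped for your distribution's:

```shell
#!/bin/sh
# FIXED is illustrative: substitute the fixed version for your distro
# from the list above.
FIXED="4.4.0-45.66"
RUNNING=$(uname -r)
# sort -V orders version strings numerically; if FIXED sorts first
# (or ties), the running kernel is at least as new as the fix.
LOWEST=$(printf '%s\n%s\n' "$FIXED" "$RUNNING" | sort -V | head -n 1)
if [ "$LOWEST" = "$FIXED" ]; then
    echo "kernel $RUNNING is at or above the fixed version"
else
    echo "kernel $RUNNING is older than the fixed version"
fi
```

Package version suffixes (like `-generic` on Ubuntu) can confuse a naive comparison, so treat this as a first check, not a substitute for your distribution's advisory.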


Some versions of CentOS can use this script provided by Red Hat for RHEL to test your server’s vulnerability. To try it, first download the script.

  • wget

Then run it with bash.

  • bash

If you’re vulnerable, you’ll see output like this:


Fix Vulnerability

Fortunately, applying the fix is straightforward: update your system and reboot your server.

On Ubuntu and Debian, upgrade your packages using apt-get.

  • sudo apt-get update && sudo apt-get dist-upgrade

You can update all of your packages on CentOS 6 and 7 with sudo yum update, but if you only want to update the kernel to address this bug, run:

  • sudo yum update kernel

Right now, we’re still waiting on a fix for CentOS 5. In the interim, you can use this workaround from the Red Hat bug tracker.

Finally, on all distributions, you’ll need to reboot your server to apply the changes.

  • sudo reboot


Make sure to update your Linux servers to stay protected from this privilege escalation bug.


Slow DNS Lookups For Web Requests

Ran into a strange problem recently … a web server behind a firewall was able to resolve names with “dig” successfully, but attempts to fetch web pages with “wget” or “curl” were very slow … they seemed to hang on name resolution. So this would work fine:

but this would hang for several seconds:
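The command snippets were lost from this post; with an illustrative hostname (example.com stands in for the real site), the two cases looked roughly like:

```
# returns an answer almost instantly:
dig example.com

# stalls for several seconds at "Resolving example.com...":
wget http://example.com/
```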

This problem extended to curl requests from within PHP … in this case a Magento website … various plugins in the site were “calling home” when loading admin pages, which made the admin panel painfully slow.

After much debugging we concluded that the problem was due to the fact that certain versions of glibc run IPv4 and IPv6 requests in parallel, which breaks some firewalls and/or DNS servers. The workaround was to add this option in /etc/resolv.conf:
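The option itself was lost from this post. The standard glibc knob for this behavior is `single-request`, which makes the line in /etc/resolv.conf:

```
options single-request
```

glibc also offers `single-request-reopen`, which some environments need instead; see resolv.conf(5) for both.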

This forces the requests to be made sequentially instead of in parallel. Hope this helps others struggling with these weird symptoms.


PayPal Certificate Upgrade

PayPal is upgrading its endpoint certificate to SHA-256. The endpoint is also used by merchants using the Instant Payment Notification (IPN) product.

A detailed FAQ has been provided here:

Merchant Security System Upgrade Guide

Also the microsite:

2015-2016 SSL Certificate Change

For most online store owners, the primary concern is that your server has the correct new G5 certificate installed. Instructions for testing can be found here:

Cert Check For Linux

If you’re running CentOS or Red Hat, then the command is:
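The command was lost from this post. One plausible reconstruction (an assumption, not necessarily the exact check the original post used) is to verify the chain against the system CA bundle with openssl, where `ipnpb.paypal.com` is PayPal’s usual IPN endpoint:

```
openssl s_client -connect ipnpb.paypal.com:443 -CApath /etc/pki/tls/certs < /dev/null
```

If the G5 root is installed and trusted, the output ends with “Verify return code: 0 (ok)”.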




MariaDB on CentOS 7 – “Error in accept: Too many open files”

By default, it seems the soft and hard open-files limits for MariaDB on CentOS 7 are 1024 and 4096, respectively. You can see these limits by first getting the process ID:

And then looking at the limits in the proc filesystem:

You’ll see something like this:

Notice the numbers for “Max open files”.
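The commands and sample output were lost from this post. A sketch of the sequence (mysqld is the daemon name for MariaDB on CentOS 7; the snippet falls back to the current shell’s PID so it runs even where MariaDB isn’t installed):

```shell
#!/bin/sh
# Get the MariaDB server's PID; fall back to this shell for illustration.
pid=$(pidof mysqld 2>/dev/null || echo $$)
# Read the process's limits from the proc filesystem; the line of interest
# looks like: "Max open files    1024    4096    files"
grep "Max open files" /proc/$pid/limits
```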

If you run into problems with MariaDB failing and you see errors like this in the log:

Then you need to increase the open files limits by editing:

and adding this line:

to the “[Service]” section. Then reload the systemctl daemon:

and restart the MariaDB service:

Now the limit will be increased.  For example:
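The file names, configuration line, and commands were lost from this post; on CentOS 7 the usual sequence is the following sketch (LimitNOFILE=65535 is an example value, not necessarily the one the post originally used):

```
# 1. Edit the service unit file:
#      /usr/lib/systemd/system/mariadb.service
#    and add, under the [Service] section:
#      LimitNOFILE=65535
# 2. Reload systemd's configuration:
sudo systemctl daemon-reload
# 3. Restart MariaDB:
sudo systemctl restart mariadb
```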

UPDATE: We’ve seen similar problems with nginx. The solution is similar … increase the limits for the nginx service.

UPDATE: As noted by Bastiaan Welmers in the comments, it is better to copy the service control file than to edit it in place:


XFS Requires “inode64” On Large Filesystems

Recently ran into a problem with a large distributed GlusterFS filesystem. All of a sudden we started getting “no free space on device” errors when trying to write files. The individual bricks in the distributed GlusterFS volume were hosted on XFS-formatted partitions. After some investigation we found that the “no free space” errors started once the size of a brick’s XFS filesystem exceeded 16TB.

The solution was to add “inode64” to the mount options for the XFS partition. The XFS FAQ states the following:

By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like “disk full” when you still have plenty space free, but there’s no more place in the first TB to create a new inode. Also, performance sucks.
To come around this, use the inode64 mount options for filesystems >1TB. Inodes will then be placed in the location where their data is, minimizing disk seeks.

After adding “inode64” to /etc/fstab, we unmounted and remounted the filesystem and restarted glusterd. Now the distributed volume is working correctly.
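For reference, a resulting /etc/fstab entry might look like this sketch (the device and mount point are illustrative):

```
/dev/sdb1  /export/brick1  xfs  defaults,inode64  0 0
```

On sufficiently new kernels the option can also be applied without editing fstab first, via a remount: `mount -o remount,inode64 <mountpoint>`.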