Channel: thinking sysadmin » virtualization

Keeping your RHEL VMs from crushing your storage at 4:02am

Running a lot of Red Hat VMs in your virtual infrastructure, on shared storage? CentOS and Scientific Linux, versions 4 and 5, count for these purposes; Fedora should likely be included too. Do you have the slocate (version 4.x and earlier) or mlocate (version 5.x) RPMs installed? If you’re uncertain, check using the following:

> rpm -q slocate
slocate-2.7-13.el4.8.i386

or

> rpm -q mlocate
mlocate-0.15-1.el5.2.x86_64

If so, multiple RHEL VMs plus mlocate or slocate may be adding up to an array-crushing 4:02am shared storage load and latency spike for you. Before being addressed, this spike was bad enough at my place of employment (when combined with a NetApp Sunday-morning disk scrub) to cause a Windows VM to crash with I/O errors. Ouch.

Details and ideas for resolution:

By default, a line in /etc/crontab runs the scripts within /etc/cron.daily at 4:02am each morning:

02 4 * * * root run-parts /etc/cron.daily

One of those scripts – mlocate.cron or slocate.cron, depending on your OS version – launches updatedb; as the man page says, “updatedb creates or updates a database used by locate(1).” (The “locate” binary is a filesystem search tool, see “man locate” for more information.) Updatedb refreshes its database by walking the filesystem, generating a fair amount of I/O on a single system. Imagine upwards of thirty of these running in parallel through VMDKs on one shared storage system carrying out internal maintenance at the same time, and you’re pretty much picturing the problem my employer had.
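Independent of when updatedb runs, you can also shrink how much I/O each run generates. On RHEL 5’s mlocate, /etc/updatedb.conf controls which filesystems and paths get walked; the values below are purely illustrative, not a recommendation for your environment:

```shell
# /etc/updatedb.conf (mlocate) -- example values only
# Skip network and pseudo-filesystems entirely:
PRUNEFS="nfs NFS tmpfs proc sysfs"
# Skip high-churn paths you'd never search with locate:
PRUNEPATHS="/tmp /var/spool /media"
```

Pruning NFS mounts in particular keeps one VM’s updatedb from hammering the same shared storage twice.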

I see three options for addressing this issue:

1) Uninstall mlocate or slocate. If you don’t currently use “locate” and you’re not interested in learning to use a tool that will likely make you more effective at your job (again, see “man locate”), this is probably the best option. (Yeah, I know, people who fit this bill generally don’t read blogs more technical than this one, so I could probably have skipped it here. Consider it an option for completeness, or if you really need to strip down an install.)

2) Disable the scheduled job by removing mlocate.cron or slocate.cron from /etc/cron.daily. This keeps locate available for your use, but requires that you update locate’s database ad-hoc and interactively by running the following as root:

# updatedb

This will take a few minutes to return, depending on the size of your file systems.
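For the record, disabling the job can be as simple as moving the script aside rather than deleting it, so it’s trivial to restore later. A sketch, with the helper function and parking directory being my own illustration (on RHEL 5 the script is /etc/cron.daily/mlocate.cron):

```shell
#!/bin/bash
# Move a cron script out of cron.daily instead of deleting it,
# so restoring it later is a single mv back.
disable_cron_script() {
  local daily_dir=$1 script=$2 parking_dir=$3
  mv "$daily_dir/$script" "$parking_dir/$script.disabled"
}

# On a real RHEL 5 box this would be:
#   disable_cron_script /etc/cron.daily mlocate.cron /root
```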

I don’t recommend this option either; at least it doesn’t fit the way I work. I often find myself using locate in high-pressure situations in which I need to quickly get a file location on a system. Waiting minutes for updatedb to return is extra painful when every second counts.

3) Stagger when updatedb runs by inserting a random delay into the script. This is my preferred alternative: locate’s database stays current automatically, and your storage doesn’t have to absorb a sudden spike in load. I implemented this by adding the lines in bold (lines 2-7, if your browser doesn’t display the bold text clearly):

#!/bin/sh
# sleep up to two hours before launching job:
value=$RANDOM
while [ $value -gt 7200 ] ; do
value=$RANDOM
done
sleep $value

nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
renice +19 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "$nodevs"

The added code inserts a pseudo-random sleep delay of up to two hours before updatedb runs, with the key being the Bash built-in variable $RANDOM. In our environment, this removed a 2000 IOPS spike at 4:02am, and eliminated a corresponding jump in filer latency. Obviously, adjust the delay period as appropriate for your environment. Additionally, be sure to add this change to your configuration management or installation management tools so that all of your RHEL and RHEL-derived VMs get the updated script.
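As an aside, the while loop above discards $RANDOM values over 7200; a single modulo expression gets you essentially the same spread in one line. This is my own shorthand, not what the script ships with ($RANDOM tops out at 32767, so the modulo is very slightly biased toward low values, which is harmless for load spreading):

```shell
#!/bin/bash
# Compute a pseudo-random delay of 0-7199 seconds in one expression.
delay=$((RANDOM % 7200))
echo "would sleep ${delay}s"   # in the cron script: sleep "$delay"
```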

Using $RANDOM to avoid this variant of the thundering herd problem also works nicely for a range of similar problems; I believe I first saw it at Moundalexis.com.
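One way to reuse the trick across those similar problems is to wrap the delay in a small function and put it in front of whatever command is herding. This is a sketch of my own (the name random_delay is made up, not something shipped with RHEL):

```shell
#!/bin/bash
# random_delay: stagger any command by sleeping 0 to MAX-1 seconds first.
# Usage: random_delay MAX_SECONDS command [args...]
random_delay() {
  local max=$1; shift
  sleep $(( RANDOM % max ))
  "$@"
}

# e.g., in a cron script:
#   random_delay 7200 /usr/bin/updatedb
```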

(This problem may apply to other Linux distributions being run as VMs, and FreeBSD does something equivalent – weekly – with /etc/periodic/weekly/310.locate. A similar solution can be applied to these environments, if necessary.)

