Tuesday 27 July 2010

Varnish and the Linux IO bottleneck

There are 2 main ways Varnish caches your data:
  1. to memory (with the malloc storage config)
  2. to disk (with the file storage config)
As Kristian explains in a best practices post, you get better performance with #1 if your cache size fits in RAM. With either method the most accessed objects end up in RAM: the malloc method puts everything in RAM and lets the kernel swap it out to disk, while the file storage method puts everything on disk and lets the kernel cache it in memory.
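
For reference, this is roughly how the two setups are selected on the command line. The listen address, backend and file path below are only illustrative, and you would normally point -f at a VCL file instead of using -b:

# 1. memory-only cache, e.g. 12GB
varnishd -a :80 -b localhost:8080 -s malloc,12G

# 2. file-backed cache, e.g. 30GB
varnishd -a :80 -b localhost:8080 -s file,/var/lib/varnish/storage.bin,30G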

We have been using the file storage for a little more than 3 months on servers with 16GB of RAM and a 30GB file storage. Recently, because of an application configuration error, the size of the cache grew from its usual maximum of 6GB to more than 20GB.

Since the system only has 16GB of RAM, the kernel now has to choose which pages to keep in memory. When a dirty mmap'ed page needs to be released, the kernel writes the changes to disk before doing so. On top of that, the Linux kernel proactively writes dirty mmap'ed pages to disk.

On a Varnish server with this setup - or with any other application that mmap()s large files - this translates into constant disk activity.

At some point the Varnish worker process gets blocked in a kernel IO call. The Varnish manager process, with no way to tell what is happening to the child, decides it has stopped responding and kills it. The log entries below are typical:

00:30:10.395590 varnishd[22919]: Child (22920) not responding to ping, killing it.
00:30:10.395622 varnishd[22919]: Child (22920) not responding to ping, killing it.
00:30:10.417309 varnishd[22919]: Child (22920) died signal=3

At this point you lose your cached data and the system goes back to normal. After some time, the size of the cache will grow to be larger than your server's RAM again, repeating the kill-restart cycle.

You could choose to give the child more time to respond to the manager process' ping requests. This is done by raising cli_timeout from its default of 10 seconds (e.g. varnishd -p cli_timeout=20), but that merely masks the issue. The real problem is that the Linux kernel is busy writing dirty pages to disk.
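
For completeness, the timeout can be raised at startup or on a running instance through the management interface (the -T address below is an assumption - use whatever your varnishd was started with):

# at startup
varnishd ... -p cli_timeout=20

# on a running instance
varnishadm -T localhost:6082 param.set cli_timeout 20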

There are parameters you can use to control how much time the kernel spends writing dirty mmap'ed pages to disk. I have spent some time fine-tuning the ones below with little result:

/proc/sys/vm/dirty_writeback_centisecs
/proc/sys/vm/dirty_ratio
/proc/sys/vm/dirty_background_ratio
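
These can be changed at runtime with sysctl; the values below are only illustrative starting points, not a recommendation:

sysctl -w vm.dirty_writeback_centisecs=1500
sysctl -w vm.dirty_ratio=20
sysctl -w vm.dirty_background_ratio=5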

In RHEL 5.x kernels you can completely disable committing changes to mmap'ed files:

echo 0 > /proc/sys/vm/flush_mmap_pages

Short of reducing the file storage size to fit in RAM, this is the best solution I have found so far. Be aware that using this with the upcoming persistent storage in Varnish is a really bad idea, as you risk serving corrupt and/or stale data. It is only acceptable here because the mmap'ed data, in this setup, is expendable: Varnish throws away the cache on restart and can always fetch objects from the backend again.
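
If you do go down this path, the setting can be made persistent across reboots through /etc/sysctl.conf (assuming the tunable exists on your kernel - it does not on mainline):

# /etc/sysctl.conf - RHEL 5.x only
vm.flush_mmap_pages = 0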

I have asked Red Hat if they plan to keep the flush_mmap_pages setting in RHEL 6 but haven't received a response yet. They did, however, confirm that msync() calls are still honoured and that dirty pages being evicted from RAM are committed to disk.

Friday 30 April 2010

CruiseControl, ProxyPass

Dear blog, today I submitted a patch for CruiseControl. It feels great to give something back (even if it's half-baked).

I couldn't find a way to make CC give the correct URL to users when running behind a configuration like the one below:

ProxyVia On
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
(...)

CC would insist on building URLs using localhost:8080 even though users can only access it via http://your.host.com/. With the patch, CruiseControl uses the X-Forwarded-Host HTTP header to build the correct address.
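
Apache's mod_proxy adds the X-Forwarded-Host header by itself when proxying, so you can check the behaviour by talking to the backend directly and supplying the header yourself (the path below is just an example):

curl -s -H 'X-Forwarded-Host: your.host.com' http://localhost:8080/cruisecontrol/ | grep -i your.host.com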

Saturday 16 January 2010

The fallacy of cheap disk space

At work, my concerns about application disk usage/waste are usually met with a scoff of "Why bother? Disk space is cheap!"

This is a myth I intend to destroy in this post. It is true that disk space gets cheaper all the time, but it is not cheap.

I'm not even going to step into the NAS/SAN area - this myth can be debunked by looking at local storage alone.

When people say disk space is cheap these days they are probably thinking of their video, mp3 and pr0n collections at home; enterprise-grade disks are another story. I chose the cheapest drives I could find in each category to illustrate my point:

Desktop: 1.5TB SATA 7.2krpm A$133 A$0.089/GB (WD15EADS)
Enterprise: 300GB SCSI 15krpm A$489 A$1.630/GB (SFU300G10K80P)

The comparison already looks grim for the myth: the cost of a data centre-worthy disk is more than 18 times that of a "disk space is cheap" drive.

In the real world redundancy is required, so for your typical RAID1/10 scenario the cost is A$3.26/GB - more than 36 times the desktop drive. You could get a better cost, A$2.45/GB, with a 3-disk RAID5 volume, but rebuild times on today's large disks make that alternative risky.
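
For clarity, the arithmetic behind those two figures (rounded):

RAID1/10 (2 disks, 1 usable):  2 x A$489 / 300GB = A$3.26/GB
RAID5    (3 disks, 2 usable):  3 x A$489 / 600GB = A$2.45/GB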

Then you need to back up your data. Regardless of your backup frequency and retention period, you will need to buy more tapes if you have more data.

Assuming that one LTO drive is able to back up all your data in the allotted time frame, a minimal retention period and 3x 200GB LTO-2 tapes (in use, on-site, off-site), you are looking at an additional A$0.57 per GB backed up and a total cost of A$3.83/GB - MORE THAN 40 TIMES the cost of cheap disk space.
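
Working backwards from those numbers (the per-tape price is implied, not a quoted figure):

Tape cost:  A$0.57/GB x 200GB / 3 tapes = roughly A$38 per LTO-2 tape
Total:      A$3.26/GB + A$0.57/GB       = A$3.83/GB
Ratio:      A$3.83 / A$0.089            = about 43x the desktop drive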

One could argue that this level of performance AND redundancy is not always needed - and one would be right. For instance, if your application data is cacheable you can split it into layers of increasing cost/performance, with something like Backblaze's solution at the back.

Once we're looking for cheaper storage, the "disk space is cheap" advocates have already lost the argument. But in case we're dealing with the stubborn kind, let's move from the capital to the operational expense of storage.

The following statement is fairly obvious but bears repeating when storage cost is dismissed as irrelevant: disk space is an asset that depreciates at the purchase price, not the market price. Once you've bought 1TB of storage at A$3.26 per GB you're stuck with that entry until it's out of the books, no matter how much cheaper disks get in the future.

Your backup opex also increases. You are going to rotate and store more tapes. You will need off-line storage and transportation for more tapes. You get the picture.

If your data is mirrored/replicated to other sites your network costs also increase.

More: because every procedure takes longer, your labour costs also go up.

And more: enjoy it while local storage meets your demand - the picture gets much uglier with external storage.