Tuesday, 28 July 2009

Sen. Conroy and his sinister Internet filter

Monday, 22 June 2009

Disk IO spikes caused by mod_proxy.c in Apache 1.3

We have been using Apache 1.3 with mod_proxy as a reverse proxy for some of our applications. The configuration is pretty simple:

CacheRoot /var/adm/apache/cache
CacheSize 4096000
CacheGcInterval 4
CacheMaxExpire 48
CacheDefaultExpire 2

The above tells Apache to run the cache garbage collector every 4 hours, keep up to 4GB worth of data, expire objects after at most 48 hours (no matter what expiry the origin server set) and, when the origin server sets no expiry at all, expire them after 2 hours.

The typical content of the cache directory for our sites is 4.5GB in about 180,000 directories and 310,000 files.
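Figures like these are easy to gather with standard tools. A small sketch (the helper name is mine):

```shell
# cache_stats DIR: print the directory count, file count and total size
# of DIR -- how numbers like the ones above can be collected.
cache_stats() {
    find "$1" -type d | wc -l
    find "$1" -type f | wc -l
    du -sh "$1" | cut -f1
}

# e.g. cache_stats /var/adm/apache/cache
```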

Shortly after deploying this to production we started seeing the load pattern below on all web servers:

Because this was happening at regular intervals it was easy to inspect one of the servers during a spike. The sar data had already told us the load was caused by disk IO:

$ sar -b
tps rtps wtps bread/s bwrtn/s
11:30:01 AM 94.53 1.89 92.63 27.68 1275.66
11:40:01 AM 1289.34 57.63 1231.71 555.35 20566.36
11:50:01 AM 79.16 1.23 77.93 21.88 1084.00
12:00:02 PM 94.21 1.77 92.44 39.52 1274.74
12:10:01 PM 84.18 1.64 82.54 35.54 1116.60
12:20:01 PM 117.43 2.62 114.81 44.68 1550.87
12:30:01 PM 150.65 3.30 147.35 82.21 1986.41
12:40:01 PM 1228.90 1.53 1227.37 33.04 20592.15
12:50:01 PM 70.46 2.65 67.80 100.57 917.25

When the load spiked I checked which processes were high on "wa" (waiting for IO) time and listed their open files with lsof.

Sure enough, I could see httpd processes with a changing set of files open in /var/adm/apache/cache. This is expected since the cache garbage collector needs to inspect cached objects and decide what to delete in order to stay under the CacheSize limit.

What surprised me is how amazingly expensive this operation is on ext3 filesystems.

To alleviate the problem I have changed some of the filesystem parameters. In /etc/fstab I told the OS to not update access times on files and directories under /var:
/dev/VolGroup00/LogVol04    /var   ext3    defaults,noatime,nodiratime        1 2
Without noatime and nodiratime every file and directory read meant a disk write to update the access time. Since we don't have a use for access times in /var I did away with them.
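The new options can be applied without a reboot by remounting the filesystem in place (run as root; a sketch):

```shell
# Remount /var so noatime/nodiratime take effect immediately,
# then confirm the active mount options (requires root).
mount -o remount,noatime,nodiratime /var
mount | grep ' /var '
```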

With tune2fs I changed the journalling data mode from the default (ordered) to writeback:
# tune2fs -o journal_data_writeback /dev/mapper/VolGroup00-LogVol04
The writeback mode can cause file corruption on unclean shutdowns, but for us the benefits outweigh the risk. Our web content in /var is replicated from a master server, so we would only be looking at losing non-critical data such as some state files and access log entries.
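The option set this way is stored in the superblock as a default mount option and takes effect on the next mount. To double-check that it stuck (run as root; a sketch):

```shell
# List the superblock's default mount options; journal_data_writeback
# should appear after the tune2fs change above (requires root).
tune2fs -l /dev/mapper/VolGroup00-LogVol04 | grep 'Default mount options'
```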

After these changes the GC is barely noticeable. If you look at the graph above you will see that there is no load spike from 13:00 onwards.

Monday, 25 May 2009

Apache 1.3.41 HostnameLookups bug?

This weekend a scheduled maintenance in the data centre took out the name servers. I wasn't worried because all the names we needed to resolve in order to keep serving pages were in /etc/hosts.

Sure enough, during the maintenance alarms started going off. Web pages, static or dynamic, were taking in excess of 5 seconds to be served. We checked /etc/nsswitch.conf to make sure forward lookups were being resolved by /etc/hosts instead of DNS:

hosts: files dns

We then inspected the access log and we could see that some entries had host names instead of IP addresses. This was odd since reverse DNS lookups were turned off in Apache:

HostnameLookups Off

A colleague suggested it could be our LogFormat configuration:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

The documentation is pretty clear that %h will only do a reverse lookup if HostnameLookups is turned on:
If HostnameLookups is set to On, then the server will try to determine the hostname and log it in place of the IP address.
Nevertheless, we replaced %h with %a. That didn't solve the issue - requests still took a little more than 5 seconds to be fulfilled. The millisecond part of the response time looked right to me, so I went on to read the resolv.conf man page and found this bit very interesting:
sets the amount of time the resolver will wait for a response from a remote name server before retrying the query via a different name server. Measured in seconds, the default is RES_TIMEOUT (currently 5, see <resolv.h>).
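Had we needed a stopgap before touching Apache, the resolver timeout itself can be tightened in /etc/resolv.conf. A hypothetical mitigation (not what we ended up doing; the nameserver address is a placeholder):

```
# /etc/resolv.conf -- fail DNS queries fast instead of waiting 5 seconds
options timeout:1 attempts:1
nameserver 192.0.2.1
```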
Now I was pretty much convinced that Apache was playing tricks on us. I double-checked all our Allow statements to make sure we weren't using hostnames, since the manual says it will do a reverse lookup in this case:
This configuration will cause the server to perform a double reverse DNS lookup on the client IP address, regardless of the setting of the HostnameLookups directive.
We only had Allow statements for the usual restricted URLs (server-status etc) and even then, were only using IP addresses.

I decided to disable the lookups in source code, since this was starting to look like a bug. In src/main/http_core.c, line 728, I commented out the reverse lookup call and replaced it with a fake response:

/* hptr = gethostbyaddr((char *)iaddr, sizeof(struct in_addr), AF_INET); */
hptr = NULL;

I did the same on line 825:
/* hptr = gethostbyaddr((char *)iaddr, sizeof(struct in_addr), AF_INET); */
hptr = NULL;

The web server went back to serving static and dynamic objects in the normal time, tens of milliseconds. I deployed the newly compiled Apache to all servers and went back to my weekend.
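A quick way to compare response times before and after a change like this (assuming curl is available; localhost stands in for the real vhost):

```shell
# Time a full request; with the DNS stall every request took ~5s,
# after the patch it should be back to tens of milliseconds.
curl -s -o /dev/null -w 'total: %{time_total}s\n' http://localhost/
```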

This hack obviously breaks all functionality that depends on reverse IP address lookups so, if I have some time, I will analyse this condition and, if it is a bug, submit a patch to the chaps at the Apache Foundation.

Thursday, 23 April 2009

QEMU/KVM mouse not working?

This guy knows the answer:

For me, because I use a physical device as the VM's disk and didn't want to give my own user permission on the device, the command runs under sudo:
$ sudo SDL_VIDEO_X11_DGAMOUSE=0 qemu-launcher
My mouse stopped working after I configured Xinerama and display rotation on X (I had also upgraded to Ubuntu 9.04, but I didn't test my VM before fiddling with xorg.conf).

Friday, 3 April 2009

XOrg configuration for Samsung LA32A450C1

This guy nailed it. I had only just found the manual page for my Samsung 32" LCD TV and was still googling for a tool to find the modeline.
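For anyone still hunting for such a tool: cvt (and the older gtf), shipped with X.Org, will compute timings for an arbitrary resolution. A sketch:

```shell
# Generate a modeline for 1360x768 at 60 Hz; the output can be pasted
# into the Monitor section of xorg.conf.
cvt 1360 768 60
```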

I've only added the option to turn off nvidia's logo to his X configuration. Too good:

Section "Monitor"
    Identifier "SamsungTV"
    ModelName "Samsung LCD TV"
    ModeLine "1360x768@60" 85.8 1360 1440 1552 1792 768 771 777 795 +hsync +vsync
    Option "dpms"
EndSection

Section "Device"
    Identifier "NVIDIAVideocard0"
    Driver "nvidia"
    Option "ModeValidation" "DFP-0: NoDFPNativeResolutionCheck"
    Option "ExactModeTimingsDVI" "TRUE"
    #### 100 DPI
    Option "UseEDIDDpi" "FALSE"
    Option "DPI" "100 x 100"
    Option "NoLogo" "True"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "NVIDIAVideocard0"
    Monitor "SamsungTV"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        ##### Native Res Samsung 1360x768@60
        Modes "1360x768@60" "1280x720" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

Friday, 16 January 2009

RHN's yum update fails

I've noticed that quite often Red Hat's yum will fail for no good reason. For example it will just show an error like:

# yum update

Error Downloading Packages:

samba-common - 3.0.28-1.el5_2.1.x86_64: failed to retrieve getPackage/samba-common-3.0.28-1.el5_2.1.x86_64.rpm from rhel-x86_64-server-5
error was [Errno 14] HTTP Error 404: Not Found

When that's the case, retrying a few times usually does the trick.

I have actually opened a ticket with RHN's support and they've told me these are "temporary errors". Temporary or not they are pretty annoying so here's what I do.

1. Retry manually until you're happy with the list of packages that will be updated
2. Run yum in a while loop so that it retries until it succeeds:

# while ! (yum update -y) ; do echo TRY AGAIN; done

Be aware that too many retries can cause your RHN subscription to be temporarily locked out. I find the benefit too great to care.
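If the lock-out risk bothers you, the loop is easy to cap. A sketch with a hypothetical retry limit (the function name and limit are mine):

```shell
# retry_cmd MAX CMD...: run CMD until it succeeds or MAX attempts are
# spent, so a broken channel can't hammer RHN forever.
retry_cmd() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        echo "TRY AGAIN ($n/$max)"
    done
    return 0
}

# e.g. retry_cmd 10 yum update -y
```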