The expanses of WolfWings' land
scratched on the wall for all to see


November 2nd, 2009

05:23 am - Linux thoughts... somewhat stream of consciousness here, forgive me.

So... I'm starting to really tune my system for performance lately, and I've been researching up-and-coming changes, both to Linux as a whole and in particular trends in applications relevant to my interests of web development and general server-fu.

Right now, we commonly end up with:

Apache
MySQL, often upgraded to Oracle (and this was well before the buy-out, no less)
Any number of frameworks at this level: JBoss, PHP, Perl, what-have-you

Most of these servers will still follow the old hidebound metric of twice as much swap as you have physical RAM (some of them capping swap at 2-4GB regardless), and most memory-management settings are left at kernel defaults that haven't changed since 16MB was considered good.

Now, there are various changes on the horizon:

Slowloris is making Linux people look into all sorts of ways to wrap Apache, or into alternatives. Some try out lighttpd (load-proportional memory leaks drop it out of the running), or front-end load balancers like perlbal or pound. I'm noticing a lot of sites that get POUNDED have been suddenly switching to a single tool regardless of whether they need a load balancer, a web server, or just a reverse proxy: nginx, which directly supports tying into a LOT of secondary tools without intermediary building blocks to assemble and maintain.
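Just to show how little it takes, here's a rough sketch of nginx sitting in front of an existing web server as a reverse proxy. The backend address/port and worker counts are made-up placeholders, not anything canonical:

worker_processes  2;

events {
    worker_connections  1024;
}

http {
    upstream backend {
        # the Apache (or whatever) instance you're hiding from the world
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location / {
            # hand everything to the backend, but keep the original host and client IP visible
            proxy_pass        http://backend;
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;
        }
    }
}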

The tipping-point migration to virtual machines is making people care more about using the least RAM possible on their servers, so they can enjoy overcommit and let the underlying storage platform do all the cache magic separately from their server images. This is drastically changing what the appropriate memory-allocation settings on Linux are.

The recent surge in VMs is also making a particular technology suddenly attractive: dynamic merging of identical memory pages. This allows MASSIVE numbers of mostly-identical virtual machines to be loaded on a single host server, on the order of two or three times the physical RAM of the host.

Surprisingly related to the above, the surge in Linux use on mobile platforms has finally made another idea pop up and seem a lot more relevant: compressed in-memory swap as a pre-buffer before hitting physical swap.

Especially combined together, the old default kernel settings for memory allocation are so far out of whack on any computer with 1-2GB more RAM than its working set needs, that I figured I'd post my suggestions based on my understanding of things as they sit, and where they're going:

First, don't worry about compcache or KSM. They'll pop up eventually; enjoy them when they do.
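When they do land, turning them on should be nothing more than a couple of echoes into sysfs plus a swapon. A rough sketch, assuming the compressed-swap module ends up named zram and KSM keeps its /sys/kernel/mm/ksm/ knobs; the exact names may shift before your distro ships them:

modprobe zram
echo $((256*1024*1024)) > /sys/block/zram0/disksize    # 256MB of compressed in-RAM swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0    # higher priority than any disk-backed swap

echo 1 > /sys/kernel/mm/ksm/run    # start the page-merging scanner; only helps apps that mark their pages mergeable, like KVM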

If you have 'enough' RAM, disable overcommit entirely, *BUT* allow the vast majority of your physical RAM to actually get used before things start failing:

vm.overcommit_memory = 2
vm.overcommit_ratio = 95
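These, like every vm.* line below, can be poked live with sysctl -w, or dropped into /etc/sysctl.conf so they survive a reboot:

sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=95
sysctl -p    # after adding the lines to /etc/sysctl.conf, reload them all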

If you're on a laptop, or a VM, force the kernel to purge disk cache before ever touching swap:

vm.swappiness = 0

Now, the dirty-page commit timings. The value of 1 below is right if you have 1GB of RAM or more; if you have less than 1GB of RAM, go buy more RAM. This will mostly disable the 'periodic' flush, and rely almost entirely on flushing every time the dirty pool reaches a percentage of system memory:

vm.dirty_ratio = 1
vm.dirty_background_ratio = 1
vm.dirty_writeback_centisecs = 0
vm.dirty_expire_centisecs = 6000
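If you want to sanity-check what these do, watch the Dirty and Writeback counters in /proc/meminfo while copying a big file around; with the settings above, Dirty should stay small instead of ballooning between flushes:

watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'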

And a final tweak to force dropping of 'data' pages from the page cache before dropping inode/directory pages:

vm.vfs_cache_pressure = 1

Now, a subtle but important tweak you should set in all your VMs: disable the I/O scheduler almost entirely. The reasons for this are akin to why tunneling TCP over TCP is such a bad idea:

for BLOCK in /sys/block/*/queue/scheduler; do echo noop > ${BLOCK}; done
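That loop has to be re-run every boot, so if you'd rather bake it in, the elevator= kernel parameter does the same thing globally. The kernel image name and root device below are just placeholders for whatever your grub config already has:

# /boot/grub/menu.lst (grub legacy), appended to the existing kernel line:
kernel /vmlinuz-2.6.31 root=/dev/vda1 ro elevator=noop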

Now... one last trick I've started doing on VMs: set up a once-a-minute script that sends '1' to /proc/sys/vm/drop_caches. This forces 'clean' cache pages, i.e. read-only cache pages, to be turned back into free memory, but leaves directory and inode pages in place. Especially once KSM shows up, this can keep VMs from chewing up all their respective RAM just from reading and writing files on the backing storage.
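The script itself is literally one line; the file name here is just what I happen to call mine, any cron.d entry running as root will do:

# /etc/cron.d/drop-clean-caches -- once a minute, as root
* * * * * root /bin/sh -c 'echo 1 > /proc/sys/vm/drop_caches'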
