Message-ID: <4573FBD1.8050802@yahoo.com.au>
Date: Mon, 04 Dec 2006 21:43:29 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Aucoin@...ston.RR.com
CC: 'Tim Schmielau' <tim@...sik3.uni-rostock.de>,
'Andrew Morton' <akpm@...l.org>, torvalds@...l.org,
linux-kernel@...r.kernel.org, clameter@....com
Subject: Re: la la la la ... swappiness
Aucoin wrote:
> We want it to swap less for this particular operation because it is low
> priority compared to the rest of what's going on inside the box.
>
> We've considered both artificially manipulating swap on the fly, similar to
> your suggestion, and a parallel thread that pumps a 3 into drop_caches
> every few seconds while the update is running, but these seem too much like
> hacks for our liking. Mind you, if we don't have a choice we'll do what we
> need to get the job done, but there's a nagging voice in our conscience that
> says keep looking for a more elegant solution and work *with* the kernel
> rather than working against it or trying to trick it into doing what we
> want.
>
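For reference, that pumper is only a few lines. A minimal sketch, assuming
a 2.6.16+ kernel (where /proc/sys/vm/drop_caches exists) and root; the
five-second interval is arbitrary:

/* Pump 3 into drop_caches every few seconds until killed.
 * Note: drop_caches only discards *clean* pagecache (and slab);
 * it never writes back or drops dirty data.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	for (;;) {
		int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);

		if (fd < 0) {
			perror("open drop_caches");
			return 1;
		}
		if (write(fd, "3", 1) != 1)
			perror("write drop_caches");
		close(fd);
		sleep(5);	/* arbitrary interval */
	}
}

It really is a blunt instrument, though: it throws away cache for every
filesystem on the box, not just the OS partitions.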
> We've already disabled OOM so we can at least keep our testing alive while
> searching for a more elegant solution. Although we want to avoid swap in
> this particular instance for this particular reason, in our hearts we agree
> with Andrew that swap can be your friend and get you out of a jam once in a
> while. Even more, we'd like to leave OOM active if we can because we want to
> be told when somebody's not being a good memory citizen.
>
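If you do want that notification back, one middle ground is to leave the
OOM killer enabled globally and exempt just the cache-owning processes.
A sketch, assuming a kernel where writing -17 (OOM_DISABLE) to
/proc/<pid>/oom_adj exempts that task:

/* Exempt the calling process from the OOM killer, leaving OOM
 * handling active for everything else. Assumes the running kernel
 * supports -17 == OOM_DISABLE in /proc/<pid>/oom_adj.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int oom_disable_self(void)
{
	char path[64];
	int fd, ret = -1;

	snprintf(path, sizeof(path), "/proc/%d/oom_adj", (int)getpid());
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	if (write(fd, "-17", 3) == 3)
		ret = 0;
	close(fd);
	return ret;
}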
> Some background: what we've done is carve up a huge chunk of memory that is
> shared between three resident processes as write cache for a proprietary
> block system layout that is part of a scalable storage architecture
> currently capable of RAID 0, 1, 5 (soon 6) virtualized across multiple
> chassis, essentially treating each machine as a "disk" and providing
> multipath I/O to multiple iSCSI targets as part of a grid/array storage
> solution. Whew! We also have a version that leverages a battery-backed write
> cache for higher performance at an additional cost. This software is
> installable on any commodity platform with 4-N disks supported by Linux,
> I've even put it on an Optiplex with 4 simulated disks. Yawn ... yet another
> iSCSI storage solution, but this one scales linearly in capacity as well as
> performance. As such, we have no user level apps on the boxes and precious
> little disk to spare for additional swap so our version of the swap
> manipulation solution is to turn swap completely off for the duration of the
> update.
>
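The swap-off variant is similarly small if you'd rather drive it from the
updater itself instead of shelling out to swapoff(8). A sketch, with a
hypothetical swap partition path and minimal error handling (needs
CAP_SYS_ADMIN):

/* Disable swap for the duration of the update, then re-enable it. */
#include <sys/swap.h>
#include <stdio.h>

#define SWAP_DEV "/dev/sda2"	/* hypothetical swap partition */

int main(void)
{
	if (swapoff(SWAP_DEV) != 0)
		perror("swapoff");

	/* ... run the untar-and-verify update here ... */

	if (swapon(SWAP_DEV, 0) != 0)
		perror("swapon");
	return 0;
}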
> I hope I haven't muddied things up even more, but basically what we want to
> do is find a way to limit the number of cached pages for disk I/O on the OS
> filesystem, even if it drastically slows down the untar-and-verify process,
> because the disk I/O we really care about is not on any of the OS
> partitions.
Hi Louis,
We had customers see similar incorrect OOM problems, so I sent in some
patches that were merged after 2.6.16. Can you upgrade to the latest kernel?
(Otherwise, I guess backporting could be an option for you.)
Basically, the fixes make the kernel more conservative about going OOM while
it still thinks it can reclaim some pages, and they also allow it to swap as
a last resort, even if swappiness is set to 0.
Once your OOM problems are solved, I think that page reclaim should do a
reasonable job at evicting the right pages with your simple untar
workload.
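If you want to help reclaim along from userspace in the meantime, one
option (just a sketch of one approach, not something the kernel needs) is
to have the untar/verify drop its own pages as it finishes each file:

/* Drop a written file's cached pages so the update doesn't fill
 * the pagecache. posix_fadvise() only discards clean pages, hence
 * the fdatasync() first.
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

static void drop_file_cache(int fd)
{
	fdatasync(fd);					/* make pages clean */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);	/* then drop them */
}

Calling that on each file descriptor before close() keeps the update's
cache footprint roughly bounded by the dirty data in flight.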
Thanks,
Nick
--
SUSE Labs, Novell Inc.