Message-ID: <Pine.LNX.4.64.0707281400580.32476@asgard.lang.hm>
Date: Sat, 28 Jul 2007 14:03:01 -0700 (PDT)
From: david@...g.hm
To: Alan Cox <alan@...rguk.ukuu.org.uk>
cc: Rene Herman <rene.herman@...il.com>,
Daniel Hazelton <dhazelton@...er.net>,
Mike Galbraith <efault@....de>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Frank Kingswood <frank@...gswood-consulting.co.uk>,
Andi Kleen <andi@...stfloor.org>,
Nick Piggin <nickpiggin@...oo.com.au>,
Ray Lee <ray-lk@...rabbit.org>,
Jesper Juhl <jesper.juhl@...il.com>,
ck list <ck@....kolivas.org>, Paul Jackson <pj@....com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: RFT: updatedb "morning after" problem [was: Re: -mm merge plans for 2.6.23]

On Sat, 28 Jul 2007, Alan Cox wrote:
>> It is. Prefetched pages can be dropped on the floor without additional I/O.
>
> Which is essentially free for most cases. In addition your disk access
> may well have been in idle time (and should be for this sort of stuff)
> and if it was in the same chunk as something nearby it was effectively
> free anyway.

As I understand it, swap prefetch only kicks in if the device is idle.

> Actual physical disk ops are precious resource and anything that mostly
> reduces the number will be a win - not to say swap prefetch is the right
> answer but accidentally or otherwise there are good reasons it may happen
> to help.
>
> Bigger, more linear chunks of writeout/readin are much more important, I
> suspect, than swap prefetching.

I'm sure this is true while you are doing the swapout or swapin and the
system is waiting for it. But with prefetch you may be able to avoid doing
the swapin at a time when the system is waiting for it, by doing it at a
time when the system is otherwise idle instead.
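
To make that concrete, here is a toy C sketch of the scheduling idea. It is
not the actual swap prefetch code; the "disk" and "swap" below are just
counters made up for illustration. The point is that prefetch work is only
attempted while the disk is idle and there is spare memory, and because the
copy on swap is kept, a prefetched page can later be dropped under memory
pressure without any further I/O.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for real kernel state. */
static int pending_io;        /* I/O somebody is actually waiting on  */
static int swapped_out = 100; /* pages currently only present on swap */
static int free_pages = 50;   /* spare RAM we are willing to fill     */

static bool disk_is_idle(void)     { return pending_io == 0; }
static bool have_free_memory(void) { return free_pages > 0; }

/* Bring one page back from swap while nobody is waiting on the disk.
 * The copy on swap is kept, so the page can later be dropped for free. */
static void prefetch_one_page(void)
{
	swapped_out--;
	free_pages--;
	printf("prefetched a page; %d still on swap, %d free pages left\n",
	       swapped_out, free_pages);
}

/* Run prefetch only in idle time; stop as soon as RAM or work runs out. */
static void swap_prefetch_tick(void)
{
	while (disk_is_idle() && have_free_memory() && swapped_out > 0)
		prefetch_one_page();
}

int main(void)
{
	swap_prefetch_tick();
	return 0;
}
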
David Lang