Message-ID: <20091012093920.GA2480@localhost>
Date: Mon, 12 Oct 2009 17:39:21 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@...ibm.com>,
Jens Axboe <jens.axboe@...cle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: make VM_MAX_READAHEAD configurable
On Mon, Oct 12, 2009 at 05:29:48PM +0800, Christian Ehrhardt wrote:
> Wu Fengguang wrote:
> > [SNIP]
> >>> May I ask for more details about your performance regression and why
> >>> it is related to readahead size? (we didn't change VM_MAX_READAHEAD..)
> >>>
> >>>
> >> Sure, the performance regression appeared when comparing Novell
> >> SLES10 vs. SLES11.
> >> While you are right, Wu, that the upstream default never changed
> >> so far, SLES10 had a patch applied that set it to 512.
> >>
> >
> > I see. I'm curious why SLES11 removed that patch. Did it experience
> > regressions with the larger readahead size?
> >
> >
>
> Only the obvious, expected one: with very little free/cacheable
> memory and many parallel processes doing sequential I/O, the RA
> size scales up for all of them, but 64 x max RA then no longer
> fits into the cache.
>
> For example, iozone with 64 threads (each reading sequentially from
> its own disk) and, I guess, about 10 MB free for cache suffered by
> ~15% due to thrashing.
FYI, I just finished a patch for dealing with readahead thrashing.
I'll run some tests and post the results :)
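
To sketch the general idea (illustrative only, not the actual patch;
next_ra_pages() and its constants are made up here): on a clean
sequential hit the window keeps doubling up to the maximum, but when
the previous window turns out to have been reclaimed before the
reader consumed it, the window is shrunk instead of grown.

#include <stdio.h>

#define MIN_RA_PAGES	4	/* hypothetical floor: 16 KB with 4 KB pages */
#define MAX_RA_PAGES	128	/* hypothetical cap: 512 KB with 4 KB pages */

/* Illustrative back-off policy, not the real kernel code. */
static unsigned long next_ra_pages(unsigned long cur, int thrashed)
{
	if (thrashed) {
		cur /= 2;		/* shrink on thrashing */
		return cur > MIN_RA_PAGES ? cur : MIN_RA_PAGES;
	}
	cur *= 2;			/* normal ramp-up */
	return cur < MAX_RA_PAGES ? cur : MAX_RA_PAGES;
}

int main(void)
{
	unsigned long ra = 32;		/* 128 KB window */
	ra = next_ra_pages(ra, 0);	/* no thrashing: grows to 64 */
	ra = next_ra_pages(ra, 1);	/* thrashed: shrinks back to 32 */
	printf("window is now %lu pages\n", ra);
	return 0;
}

With many parallel readers, a policy like this lets each window
settle at a size the remaining cache can actually hold, instead of
every reader ramping up to the maximum.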
Thanks,
Fengguang