Message-ID: <20091230051540.GA16308@localhost>
Date: Wed, 30 Dec 2009 13:15:40 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: Quentin Barnes <qbarnes+nfs@...oo-inc.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, Nick Piggin <npiggin@...e.de>,
Steven Whitehouse <swhiteho@...hat.com>
Subject: Re: [RFC][PATCH] Disabling read-ahead makes I/O of large reads
small
Andi,
On Wed, Dec 30, 2009 at 02:04:43AM +0800, Andi Kleen wrote:
> Quentin Barnes <qbarnes+nfs@...oo-inc.com> writes:
>
> cc fengguang who is Mr.Readahead. The full description+patch
> is in the archives.
Thank you for the CC.
> > In porting some application code to Linux, its performance over
> > NFSv3 on Linux is terrible. I'm posting this note to LKML since
> > the problem was actually tracked back to the VFS layer.
> [...]
> > I have no idea if my patch is the appropriate fix. I'm well out of
> > my area in this part of the kernel. It solves this one problem, but
> > I have no idea how many boundary cases it doesn't cover or even if
> > it is the right way to go about addressing this issue.
> >
> > Is this behavior of shortening the I/O of read(2) considered a bug? And
> > is this approach for a fix appropriate?
>
> It sounds like a (performance) bug to me.
Yes, it's a bug. It crossed my mind early on; I'm to blame for losing
track of it.
> From a quick look, your fix looks reasonable to me.
Yes, it's reasonable to directly call force_page_cache_readahead() in
this case.
However, the ra_pages=0 trick in fadvise also needs fixing. It would be
better to have fadvise set a dedicated readahead flag, because
ra_pages=0 is used in many other places to truly disable both heuristic
and forced readahead. See the second patch's description for details.
Thanks,
Fengguang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/