Message-ID: <20120131202311.GB4378@redhat.com>
Date: Tue, 31 Jan 2012 15:23:11 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Shaohua Li <shaohua.li@...el.com>
Cc: lkml <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>,
Herbert Poetzl <herbert@...hfloor.at>,
Eric Dumazet <eric.dumazet@...il.com>,
Wu Fengguang <wfg@...ux.intel.com>
Subject: Re: [PATCH] fix readahead pipeline break caused by block plug
On Tue, Jan 31, 2012 at 09:47:34AM -0500, Vivek Goyal wrote:
> On Tue, Jan 31, 2012 at 03:59:40PM +0800, Shaohua Li wrote:
> > Herbert Poetzl reported a performance regression since 2.6.39. The test
> > is a simple dd read, but with a big block size. The reason is:
> >
> > T1: ra (A, A+128k), (A+128k, A+256k)
> > T2: lock_page for page A, submit the 256k
> > T3: hit page A+128k, ra (A+256k, A+384k). The range isn't submitted
> > because of the plug, and there isn't any lock_page till we hit page A+256k,
> > because all pages from A to A+256k are in memory
> > T4: hit page A+256k, ra (A+384k, A+512k). Because of the plug, the range
> > isn't submitted again.
>
> Why is the IO not submitted because of the plug? Doesn't the task now get
> scheduled out, causing an unplug? IOW, are we now busy-waiting somewhere,
> preventing the unplug?
Ok, after putting in some trace points, I think I now understand what is
happening.
We submit some readahead IO to the device request queue, but because of the
nested plug, the queue never gets unplugged. When the read logic reaches a
page which is not in the page cache, it waits for the page to be read from
the disk (lock_page_killable()), and only at that point do we flush the
plug list.
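
To make the nesting concrete, here is a minimal sketch of the effect.
This is illustrative only, not real kernel code: nested_plug_demo() is a
made-up function, and I believe the outer plug is taken in
generic_file_aio_read() while the inner one sits in the readahead
submission path (e.g. read_pages() in mm/readahead.c):

	#include <linux/bio.h>
	#include <linux/blkdev.h>

	/* Hypothetical demo of nested plugging, not real kernel code. */
	static void nested_plug_demo(struct bio *bio)
	{
		struct blk_plug outer, inner;

		blk_start_plug(&outer);	/* installs current->plug = &outer */

		blk_start_plug(&inner);	/* current->plug is already set, so
					 * this nested plug is a no-op */
		submit_bio(READ, bio);	/* request queues on &outer's plug
					 * list, nothing reaches the driver */
		blk_finish_plug(&inner);/* inner list is empty, still no IO */

		/*
		 * The readahead IO is still sitting on &outer here. It only
		 * reaches the device when the task sleeps (the scheduler
		 * flushes current->plug) or at the finish below.
		 */
		blk_finish_plug(&outer);
	}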
So effectively the readahead logic is kind of broken in parts because of
nested plugging. Removing the top-level plug (in generic_file_aio_read())
for buffered reads will allow the queue to be unplugged earlier for
readahead.
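
Roughly, something like the following (an untested sketch against
mm/filemap.c, just to illustrate the idea; the context lines will differ).
The plug would be kept only around the O_DIRECT path, where batching the
submission still helps:

	--- a/mm/filemap.c
	+++ b/mm/filemap.c
	@@ generic_file_aio_read() @@
	-	struct blk_plug plug;
	 	...
	-	blk_start_plug(&plug);
	-
	 	/* coalesce the iovecs and go direct-io */
	 	if (filp->f_flags & O_DIRECT) {
	 		...
	+		struct blk_plug plug;
	+
	+		blk_start_plug(&plug);
	 		retval = mapping->a_ops->direct_IO(READ, iocb, iov,
	 						   pos, nr_segs);
	+		blk_finish_plug(&plug);
	 		...
	 	}
	 	...
	-	blk_finish_plug(&plug);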
Thanks
Vivek