Message-ID: <20200907071619.GA28569@infradead.org>
Date: Mon, 7 Sep 2020 08:16:19 +0100
From: Christoph Hellwig <hch@...radead.org>
To: Bean Huo <huobean@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, beanhuo@...ron.com,
Richard Weinberger <richard@....at>,
linux-mtd@...ts.infradead.org
Subject: Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in
case of ra->ra_pages == 0
On Fri, Sep 04, 2020 at 04:48:07PM +0200, Bean Huo wrote:
> From: Bean Huo <beanhuo@...ron.com>
>
> Currently, generic_file_buffered_read() breaks up larger batches of pages
> and reads data one page at a time when ra->ra_pages == 0. This patch
> allows it to pass batches of pages down to the device if the supported
> maximum IO size is >= the requested size.
At least ubifs and mtd seem to force ra_pages = 0 to disable read-ahead
entirely, so this seems intentional.