Message-ID: <nepi5e74wtghvr6a6n26rdgqaa7tzitylyoamfnzoqu6s5gq4h@zqtve2irigd6>
Date: Thu, 8 May 2025 14:55:43 +0200
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>, Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>, David Hildenbrand <david@...hat.com>,
Dave Chinner <david@...morbit.com>, Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Kalesh Singh <kaleshsingh@...gle.com>, Zi Yan <ziy@...dia.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC PATCH v4 1/5] mm/readahead: Honour new_order in
page_cache_ra_order()
Hey Ryan,
On Wed, Apr 30, 2025 at 03:59:14PM +0100, Ryan Roberts wrote:
> FOLIO 0x0001a000 0x0001b000 4096 26 27 1 0
> FOLIO 0x0001b000 0x0001c000 4096 27 28 1 0
> FOLIO 0x0001c000 0x0001d000 4096 28 29 1 0
> FOLIO 0x0001d000 0x0001e000 4096 29 30 1 0
> FOLIO 0x0001e000 0x0001f000 4096 30 31 1 0
> FOLIO 0x0001f000 0x00020000 4096 31 32 1 0
> FOLIO 0x00020000 0x00024000 16384 32 36 4 2
> FOLIO 0x00024000 0x00028000 16384 36 40 4 2
> FOLIO 0x00028000 0x0002c000 16384 40 44 4 2
> FOLIO 0x0002c000 0x00030000 16384 44 48 4 2
> ...
>
> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> ---
> mm/readahead.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 6a4e96b69702..8bb316f5a842 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -479,9 +479,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
>
So we have always had a fallback to do_page_cache_ra() when the
readahead size is less than 4 pages (16k). I think this was there
because we were adding `2` to new_order, i.e. we were aiming for at
least order-2 (16k) folios:
	unsigned int min_ra_size = max(4, mapping_min_folio_nrpages(mapping));

	/*
	 * Fallback when size < min_nrpages as each folio should be
	 * at least min_nrpages anyway.
	 */
	if (!mapping_large_folio_support(mapping) || ra->size < min_ra_size)
		goto fallback;
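
FWIW, a quick userspace sketch (not kernel code; assumes 4k base pages
and that mapping_min_folio_nrpages() == 1 << min_order, both of which
are just illustrative assumptions here) of what that threshold works
out to:

#include <stdio.h>

int main(void)
{
	unsigned int min_order;

	for (min_order = 0; min_order <= 4; min_order++) {
		unsigned int min_nrpages = 1u << min_order;
		/* min_ra_size = max(4, min_nrpages) */
		unsigned int min_ra_size = min_nrpages > 4 ? min_nrpages : 4;

		printf("min_order=%u -> min_ra_size=%u pages (%uk)\n",
		       min_order, min_ra_size, min_ra_size * 4);
	}
	return 0;
}

So the hard-coded 4 only bites when mapping_min_folio_nrpages() is
below 4; for larger block sizes min_nrpages dominates anyway.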
> limit = min(limit, index + ra->size - 1);
>
> - if (new_order < mapping_max_folio_order(mapping))
> - new_order += 2;
Now that you have moved this, we could make the lhs of the max 2 (8k)
instead of 4 (16k):
- unsigned int min_ra_size = max(4, mapping_min_folio_nrpages(mapping));
+ unsigned int min_ra_size = max(2, mapping_min_folio_nrpages(mapping));
I think if we do that, we might ramp up to 8k sooner rather than jumping
from 4k to 16k directly?
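
To illustrate with a standalone sketch (not kernel code; assumes 4k
base pages, and the hypothetical values max_order = 9 (PMD order on
x86-64) and min_order = 0; for simplicity it starts from max_order,
i.e. assumes new_order was already bumped that high) of the order
clamping quoted below:

#include <stdio.h>

static unsigned int ilog2_u(unsigned long x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned int max_order = 9, min_order = 0;
	unsigned long ra_size;

	for (ra_size = 1; ra_size <= 8; ra_size *= 2) {
		unsigned int order = max_order;

		if (order > ilog2_u(ra_size))	/* min(order, ilog2(size)) */
			order = ilog2_u(ra_size);
		if (order < min_order)		/* max(order, min_order) */
			order = min_order;
		printf("ra->size=%lu pages -> order %u (%luk folios)\n",
		       ra_size, order, 4UL << order);
	}
	return 0;
}

With min_ra_size == 2, ra->size == 2 would give us order-1 (8k) folios
instead of falling back to 0-order pages, i.e. an 8k step between 4k
and 16k.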
> -
> new_order = min(mapping_max_folio_order(mapping), new_order);
> new_order = min_t(unsigned int, new_order, ilog2(ra->size));
> new_order = max(new_order, min_order);
> @@ -683,6 +680,7 @@ void page_cache_async_ra(struct readahead_control *ractl,
> ra->size = get_next_ra_size(ra, max_pages);
> ra->async_size = ra->size;
> readit:
> + order += 2;
> ractl->_index = ra->start;
> page_cache_ra_order(ractl, ra, order);
> }
> --
> 2.43.0
>
--
Pankaj