Message-ID: <8d35bbf9-5843-39be-c429-3c43108520d3@intel.com>
Date: Wed, 28 Jun 2023 10:59:22 +0800
From: Yin Fengwei <fengwei.yin@...el.com>
To: <akpm@...ux-foundation.org>, <mike.kravetz@...cle.com>,
<willy@...radead.org>, <ackerleytng@...gle.com>,
<linux-fsdevel@...r.kernel.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
CC: kernel test robot <oliver.sang@...el.com>
Subject: Re: [PATCH] readahead: Correct the start and size in
ondemand_readahead()
On 6/27/23 13:07, Yin Fengwei wrote:
> The commit
> 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
> updated page_cache_next_miss() to return an index beyond the
> range.
>
> But it breaks the start/size of ra in ondemand_readahead() because
> the extra offset of one is accumulated into readahead_index. As a
> consequence, a sub-optimal readahead order is picked.
>
> Tracing the order parameter of filemap_alloc_folio() showed:
>
> With commit 9425c591e06a:
>   page order :     count    distribution
>            0 :    892073   |                                        |
>            1 :         0   |                                        |
>            2 :  65120457   |****************************************|
>            3 :  32914005   |********************                    |
>            4 :  33020991   |********************                    |
>
> With parent commit:
>   page order :     count    distribution
>            0 :   3417288   |****                                    |
>            1 :         0   |                                        |
>            2 :    877012   |*                                       |
>            3 :       288   |                                        |
>            4 :   5607522   |*******                                 |
>            5 :  29974228   |****************************************|
>
> Fix the issue by setting the correct start/size of ra in
> ondemand_readahead().
>
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com
> Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
> Signed-off-by: Yin Fengwei <fengwei.yin@...el.com>
> ---
> mm/readahead.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 47afbca1d122e..a1b8c628851a9 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -614,11 +614,11 @@ static void ondemand_readahead(struct readahead_control *ractl,
>  				max_pages);
>  		rcu_read_unlock();
>
> -		if (!start || start - index > max_pages)
> +		if (!start || start - index - 1 > max_pages)
>  			return;

The offset by one only happens when there are no gaps in the range, so this
patch needs an update. I will send out v2 soon. Thanks.
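
To illustrate, here is a minimal userspace sketch (not kernel code; the
helper name next_miss() is made up, and it assumes that the gap-found
return value is unchanged while the no-gap return value moved from the
last in-range index to one beyond it):

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of page_cache_next_miss(); new_behaviour selects the
 * post-9425c591e06a semantics.
 */
static unsigned long next_miss(const bool *cached, unsigned long index,
                               unsigned long max_scan, bool new_behaviour)
{
        for (unsigned long i = 0; i < max_scan; i++)
                if (!cached[index + i])
                        return index + i;  /* gap found: same either way */
        /* no gap: old returned the last scanned index, new one beyond it */
        return index + max_scan - (new_behaviour ? 0 : 1);
}

int main(void)
{
        bool cached[64];
        unsigned long index = 8, max_pages = 16;

        for (int i = 0; i < 64; i++)
                cached[i] = true;

        /* Case 1: no gap in the scanned range -> new result is one larger. */
        printf("no gap:   old start-index=%lu  new start-index=%lu\n",
               next_miss(cached, index + 1, max_pages, false) - index,
               next_miss(cached, index + 1, max_pages, true) - index);

        /* Case 2: a real gap at index 12 -> old and new agree. */
        cached[12] = false;
        printf("with gap: old start-index=%lu  new start-index=%lu\n",
               next_miss(cached, index + 1, max_pages, false) - index,
               next_miss(cached, index + 1, max_pages, true) - index);

        /*
         * Only the no-gap case moved by one, so subtracting 1 from start
         * unconditionally, as this v1 does, ends up one short whenever a
         * real gap exists.
         */
        return 0;
}
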
Regards
Yin, Fengwei
>
> -		ra->start = start;
> -		ra->size = start - index; /* old async_size */
> +		ra->start = start - 1;
> +		ra->size = start - index - 1; /* old async_size */
>  		ra->size += req_size;
>  		ra->size = get_next_ra_size(ra, max_pages);
>  		ra->async_size = ra->size;