Message-ID: <20240614092602.jc5qeoxy24xj6kl7@quentin>
Date: Fri, 14 Jun 2024 09:26:02 +0000
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: david@...morbit.com, djwong@...nel.org, chandan.babu@...cle.com,
brauner@...nel.org, akpm@...ux-foundation.org, mcgrof@...nel.org,
linux-mm@...ck.org, hare@...e.de, linux-kernel@...r.kernel.org,
yang@...amperecomputing.com, Zi Yan <zi.yan@...t.com>,
linux-xfs@...r.kernel.org, p.raghav@...sung.com,
linux-fsdevel@...r.kernel.org, hch@....de, gost.dev@...sung.com,
cl@...amperecomputing.com, john.g.garry@...cle.com
Subject: Re: [PATCH v7 04/11] readahead: allocate folios with
mapping_min_order in readahead
On Wed, Jun 12, 2024 at 07:50:49PM +0100, Matthew Wilcox wrote:
> On Fri, Jun 07, 2024 at 02:58:55PM +0000, Pankaj Raghav (Samsung) wrote:
> > @@ -230,7 +247,9 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> > struct folio *folio = xa_load(&mapping->i_pages, index + i);
> > int ret;
> >
> > +
>
> Spurious newline
Oops.
>
> > if (folio && !xa_is_value(folio)) {
> > + long nr_pages = folio_nr_pages(folio);
>
> Hm, but we don't have a reference on this folio. So this isn't safe.
>
That is why I added a check for mapping after read_pages(). You are
right, we can make it better.
> > @@ -240,12 +259,24 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> > * not worth getting one just for that.
> > */
> > read_pages(ractl);
> > - ractl->_index += folio_nr_pages(folio);
> > +
> > + /*
> > + * Move the ractl->_index by at least min_pages
> > + * if the folio got truncated to respect the
> > + * alignment constraint in the page cache.
> > + *
> > + */
> > + if (mapping != folio->mapping)
> > + nr_pages = min_nrpages;
> > +
> > + VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
> > + ractl->_index += nr_pages;
>
> Why not just:
> ractl->_index += min_nrpages;
Then we would only move by min_nrpages even if the folio we found had a
bigger order. Hannes' patch (the first patch in the series) made sure we
move ractl->_index by folio_nr_pages() instead of 1; making this change
would defeat that purpose, because without a mapping min order set,
min_nrpages will be 1.

What I could do is the following:
diff --git a/mm/readahead.c b/mm/readahead.c
index 389cd802da63..92cf45cdb4d3 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -249,7 +249,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
if (folio && !xa_is_value(folio)) {
- long nr_pages = folio_nr_pages(folio);
+ long nr_pages;
/*
* Page already present? Kick off the current batch
* of contiguous pages before continuing with the
@@ -266,10 +266,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
* alignment constraint in the page cache.
*
*/
- if (mapping != folio->mapping)
- nr_pages = min_nrpages;
+ nr_pages = max(folio_nr_pages(folio), (long)min_nrpages);
- VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
ractl->_index += nr_pages;
i = ractl->_index + ractl->_nr_pages - index;
continue;
Now we still move while respecting the min order constraint, but if we
found a bigger folio and we do have a reference, we move by
folio_nr_pages().
>
> like you do below?
Below that point we add a folio of min_order ourselves, so if that
fails for some reason, we can unconditionally move by min_nrpages.