Message-ID: <ZbMDO5mkAFmN2LHz@google.com>
Date: Thu, 25 Jan 2024 16:56:27 -0800
From: Chris Li <chrisl@...nel.org>
To: Kairui Song <ryncsn@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Yu Zhao <yuzhao@...gle.com>, Wei Xu <weixugc@...gle.com>,
Matthew Wilcox <willy@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/3] mm, lru_gen: try to prefetch next page when
scanning LRU

On Fri, Jan 26, 2024 at 01:51:44AM +0800, Kairui Song wrote:
> > > mm/vmscan.c | 30 ++++++++++++++++++++++++++----
> > > 1 file changed, 26 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 4f9c854ce6cc..03631cedb3ab 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -3681,15 +3681,26 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
> > >          /* prevent cold/hot inversion if force_scan is true */
> > >          for (zone = 0; zone < MAX_NR_ZONES; zone++) {
> > >                  struct list_head *head = &lrugen->folios[old_gen][type][zone];
> > > +                struct folio *prev = NULL;
> > >
> > > -                while (!list_empty(head)) {
> > > -                        struct folio *folio = lru_to_folio(head);
> > > +                if (!list_empty(head))
> > > +                        prev = lru_to_folio(head);
> > > +
> > > +                while (prev) {
> > > +                        struct folio *folio = prev;
> > >
> > >                          VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
> > >                          VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
> > >                          VM_WARN_ON_ONCE_FOLIO(folio_is_file_lru(folio) != type, folio);
> > >                          VM_WARN_ON_ONCE_FOLIO(folio_zonenum(folio) != zone, folio);
> > >
> > > +                        if (unlikely(list_is_first(&folio->lru, head))) {
> > > +                                prev = NULL;
> > > +                        } else {
> > > +                                prev = lru_to_folio(&folio->lru);
> > > +                                prefetchw(&prev->flags);
> > > +                        }
> >
> > This makes the code flow much harder to follow. Also, for architectures
> > that do not support prefetch, this will be a net loss.
> >
> > Can you use prefetchw_prev_lru_folio() instead? It will make the code
> > much easier to follow. It also turns into a no-op when prefetch is not
> > supported.
> >
> > Chris
> >
>
> Hi Chris,
>
> Thanks for the suggestion.
>
> Yes, that's doable. I made it this way because in the previous series
> (V1 & V2) I applied the bulk move patch first, which needed and
> introduced the `prev` variable here, so the prefetch logic just used it.
> For V3 I did a rebase and moved the prefetch commit to be the first
> one, since it seems to be the most effective one, and just kept the

Maybe something like this? Totally not tested. Feel free to use it any
way you want.

Chris

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..2100e786ccc6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3684,6 +3684,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 
                 while (!list_empty(head)) {
                         struct folio *folio = lru_to_folio(head);
+                        prefetchw_prev_lru_folio(folio, head, flags);
 
                         VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
                         VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
@@ -4346,7 +4347,10 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 
         while (!list_empty(head)) {
                 struct folio *folio = lru_to_folio(head);
-                int delta = folio_nr_pages(folio);
+                int delta;
+
+                prefetchw_prev_lru_folio(folio, head, flags);
+                delta = folio_nr_pages(folio);
 
                 VM_WARN_ON_ONCE_FOLIO(folio_test_unevictable(folio), folio);
                 VM_WARN_ON_ONCE_FOLIO(folio_test_active(folio), folio);
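
For context on the no-op behavior: prefetchw_prev_lru_folio() is the helper
already defined in mm/vmscan.c, guarded by ARCH_HAS_PREFETCHW. The sketch
below is written from memory rather than copied from a specific tree, so the
exact wording may differ, but it shows the shape of the helper: prefetch the
requested field of the previous list entry for write, and compile away to
nothing when the architecture has no prefetchw:

#ifdef ARCH_HAS_PREFETCHW
/*
 * Prefetch (for write) the given field of the folio that a tail-to-head
 * LRU walk will visit next, unless the current folio is already the last
 * entry on the list. (Sketch; see mm/vmscan.c for the real definition.)
 */
#define prefetchw_prev_lru_folio(_folio, _base, _field)                 \
({                                                                      \
        if ((_folio)->lru.prev != _base) {                              \
                struct folio *prev;                                     \
                                                                        \
                prev = lru_to_folio(&(_folio)->lru);                    \
                prefetchw(&prev->_field);                               \
        }                                                               \
})
#else
/* No prefetch support: calls compile away entirely. */
#define prefetchw_prev_lru_folio(_folio, _base, _field) do { } while (0)
#endif

So a prefetchw_prev_lru_folio(folio, head, flags) call in the loops above
costs nothing on architectures without prefetchw, which is what makes it
preferable to an open-coded prefetchw() in the scan loops.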