Message-ID: <CAOUHufaO38q3LFcdXR3HjSC9jK=OtFS=aJxUhetKZZiAF-Cf4g@mail.gmail.com>
Date: Tue, 14 Jan 2025 20:59:04 -0700
From: Yu Zhao <yuzhao@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Chen Ridong <chenridong@...weicloud.com>, akpm@...ux-foundation.org, mhocko@...e.com,
yosryahmed@...gle.com, david@...hat.com, willy@...radead.org,
ryan.roberts@....com, baohua@...nel.org, 21cnbao@...il.com,
wangkefeng.wang@...wei.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
chenridong@...wei.com, wangweiyang2@...wei.com, xieym_ict@...mail.com
Subject: Re: [PATCH v7 mm-unstable] mm: vmscan: retry folios written back
while isolated for traditional LRU
On Mon, Jan 13, 2025 at 8:52 AM Johannes Weiner <hannes@...xchg.org> wrote:
>
> On Sat, Jan 11, 2025 at 09:15:04AM +0000, Chen Ridong wrote:
> > @@ -5706,6 +5706,44 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
> >
> > #endif /* CONFIG_LRU_GEN */
> >
> > +/**
> > + * find_folios_written_back - Find and move the written back folios to a new list.
> > + * @list: folios list
> > + * @clean: the written back folios list
> > + * @lruvec: the lruvec
> > + * @type: LRU_GEN_ANON/LRU_GEN_FILE, only for multi-gen LRU
> > + * @skip_retry: whether to skip the retry.
> > + */
> > +static inline void find_folios_written_back(struct list_head *list,
> > + struct list_head *clean, struct lruvec *lruvec, int type, bool skip_retry)
> > +{
> > + struct folio *folio;
> > + struct folio *next;
> > +
> > + list_for_each_entry_safe_reverse(folio, next, list, lru) {
> > +#ifdef CONFIG_LRU_GEN
> > + DEFINE_MIN_SEQ(lruvec);
> > +#endif
> > + if (!folio_evictable(folio)) {
> > + list_del(&folio->lru);
> > + folio_putback_lru(folio);
> > + continue;
> > + }
> > +
> > + /* retry folios that may have missed folio_rotate_reclaimable() */
> > + if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
> > + !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
> > + list_move(&folio->lru, clean);
> > + continue;
> > + }
> > +#ifdef CONFIG_LRU_GEN
> > + /* don't add rejected folios to the oldest generation */
> > + if (lru_gen_enabled() && lru_gen_folio_seq(lruvec, folio, false) == min_seq[type])
> > + set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
> > +#endif
> > + }
>
> Can't this be solved much more easily by acting on the flag in the
> generic LRU add/putback path? Instead of walking the list again.
>
> Especially with Kirill's "[PATCH 0/8] mm: Remove PG_reclaim" that
> removes the PG_readahead ambiguity.
I don't follow -- my understanding is that with Kirill's series, there
is no need to do anything for the generic path. (I'll remove the retry
in MGLRU which Kirill left behind.)
This approach actually came up during the discussions while I was
looking at the problem. My concern back then was that shifting the
work from the reclaim path to the writeback path can reduce the
overall writeback throughput. IIRC, there were regressions with how I
implemented it. Let me try finding my notes and see if those
regressions still exist with Jen and Kirill's implementation.
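For reference, a rough sketch of what I understand "acting on the flag in
the generic LRU putback path" to mean, written against the current
PG_reclaim scheme (i.e. before Kirill's removal series). This is purely
illustrative: lru_putback_one() is a hypothetical hook name, only the
folio_test_*()/folio_clear_reclaim()/lruvec_add_folio*() helpers are the
existing ones.

	#include <linux/mm_inline.h>	/* lruvec_add_folio{,_tail}() */
	#include <linux/page-flags.h>	/* folio_test_reclaim() etc. */

	/* Hypothetical putback hook, not from either series. */
	static void lru_putback_one(struct lruvec *lruvec, struct folio *folio)
	{
		if (folio_test_reclaim(folio) && !folio_test_dirty(folio) &&
		    !folio_test_writeback(folio)) {
			/*
			 * Writeback completed while the folio was isolated,
			 * so folio_rotate_reclaimable() could not rotate it.
			 * Add it to the tail of the inactive list so the
			 * next reclaim pass finds it first.
			 */
			folio_clear_reclaim(folio);
			lruvec_add_folio_tail(lruvec, folio);
			return;
		}

		lruvec_add_folio(lruvec, folio);
	}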