Message-ID: <20250113155206.GB829144@cmpxchg.org>
Date: Mon, 13 Jan 2025 10:52:06 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Chen Ridong <chenridong@...weicloud.com>
Cc: akpm@...ux-foundation.org, mhocko@...e.com, yosryahmed@...gle.com,
yuzhao@...gle.com, david@...hat.com, willy@...radead.org,
ryan.roberts@....com, baohua@...nel.org, 21cnbao@...il.com,
wangkefeng.wang@...wei.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, chenridong@...wei.com,
wangweiyang2@...wei.com, xieym_ict@...mail.com
Subject: Re: [PATCH v7 mm-unstable] mm: vmscan: retry folios written back
while isolated for traditional LRU
On Sat, Jan 11, 2025 at 09:15:04AM +0000, Chen Ridong wrote:
> @@ -5706,6 +5706,44 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
>
> #endif /* CONFIG_LRU_GEN */
>
> +/**
> + * find_folios_written_back - Find and move the written back folios to a new list.
> + * @list: folios list
> + * @clean: the written back folios list
> + * @lruvec: the lruvec
> + * @type: LRU_GEN_ANON/LRU_GEN_FILE, only for multi-gen LRU
> + * @skip_retry: whether to skip the retry.
> + */
> +static inline void find_folios_written_back(struct list_head *list,
> + struct list_head *clean, struct lruvec *lruvec, int type, bool skip_retry)
> +{
> + struct folio *folio;
> + struct folio *next;
> +
> + list_for_each_entry_safe_reverse(folio, next, list, lru) {
> +#ifdef CONFIG_LRU_GEN
> + DEFINE_MIN_SEQ(lruvec);
> +#endif
> + if (!folio_evictable(folio)) {
> + list_del(&folio->lru);
> + folio_putback_lru(folio);
> + continue;
> + }
> +
> + /* retry folios that may have missed folio_rotate_reclaimable() */
> + if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
> + !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
> + list_move(&folio->lru, clean);
> + continue;
> + }
> +#ifdef CONFIG_LRU_GEN
> + /* don't add rejected folios to the oldest generation */
> + if (lru_gen_enabled() && lru_gen_folio_seq(lruvec, folio, false) == min_seq[type])
> + set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
> +#endif
> + }
Can't this be solved much more easily by acting on the flag in the
generic LRU add/putback path, instead of walking the list again?
Especially with Kirill's "[PATCH 0/8] mm: Remove PG_reclaim" that
removes the PG_readahead ambiguity.