Message-Id: <20250415024532.26632-21-songmuchun@bytedance.com>
Date: Tue, 15 Apr 2025 10:45:24 +0800
From: Muchun Song <songmuchun@...edance.com>
To: hannes@...xchg.org,
mhocko@...nel.org,
roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
akpm@...ux-foundation.org,
david@...morbit.com,
zhengqi.arch@...edance.com,
yosry.ahmed@...ux.dev,
nphamcs@...il.com,
chengming.zhou@...ux.dev
Cc: linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
linux-mm@...ck.org,
hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com,
Muchun Song <songmuchun@...edance.com>
Subject: [PATCH RFC 20/28] mm: workingset: prevent lruvec release in workingset_refault()

In the near future, a folio will no longer pin its corresponding
memory cgroup. As a result, an lruvec returned by folio_lruvec()
may be released unless the caller holds the RCU read lock or a
reference to the folio's memory cgroup.

This patch takes the RCU read lock to guard against the release of
the lruvec while workingset_refault() is using it. It is a
preparatory step for the reparenting of LRU pages.
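
For illustration, the locking pattern this change relies on is
roughly the following (a minimal sketch of the rule, not the exact
kernel code):

	rcu_read_lock();
	lruvec = folio_lruvec(folio);
	/* The lruvec cannot be released inside this RCU read-side section. */
	mod_lruvec_state(lruvec, ...);
	rcu_read_unlock();
	/* After this point, the lruvec may be released at any time. */
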
Signed-off-by: Muchun Song <songmuchun@...edance.com>
---
mm/workingset.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index e14b9e33f161..ef89d18cb8cf 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -560,11 +560,12 @@ void workingset_refault(struct folio *folio, void *shadow)
 	 * locked to guarantee folio_memcg() stability throughout.
 	 */
 	nr = folio_nr_pages(folio);
+	rcu_read_lock();
 	lruvec = folio_lruvec(folio);
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 
 	if (!workingset_test_recent(shadow, file, &workingset, true))
-		return;
+		goto out;
 
 	folio_set_active(folio);
 	workingset_age_nonresident(lruvec, nr);
@@ -580,6 +581,8 @@ void workingset_refault(struct folio *folio, void *shadow)
 		lru_note_cost_refault(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
 	}
+out:
+	rcu_read_unlock();
 }
 
 /**
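
For reference, with both hunks applied the control flow of
workingset_refault() ends roughly as follows (reconstructed from the
diff above; unchanged lines elided):

	nr = folio_nr_pages(folio);
	rcu_read_lock();
	lruvec = folio_lruvec(folio);
	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

	if (!workingset_test_recent(shadow, file, &workingset, true))
		goto out;
	...
out:
	rcu_read_unlock();
}

Every early return now funnels through the "out" label, so the RCU
read lock taken before folio_lruvec() is always dropped on exit.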
--
2.20.1