Message-ID: <ab60b720d6aef1069038bc4c52d371fb57eaa6e8.1765956025.git.zhengqi.arch@bytedance.com>
Date: Wed, 17 Dec 2025 15:27:38 +0800
From: Qi Zheng <qi.zheng@...ux.dev>
To: hannes@...xchg.org,
hughd@...gle.com,
mhocko@...e.com,
roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
david@...nel.org,
lorenzo.stoakes@...cle.com,
ziy@...dia.com,
harry.yoo@...cle.com,
imran.f.khan@...cle.com,
kamalesh.babulal@...cle.com,
axelrasmussen@...gle.com,
yuanchu@...gle.com,
weixugc@...gle.com,
chenridong@...weicloud.com,
mkoutny@...e.com,
akpm@...ux-foundation.org,
hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com,
lance.yang@...ux.dev
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
Muchun Song <songmuchun@...edance.com>,
Qi Zheng <zhengqi.arch@...edance.com>
Subject: [PATCH v2 14/28] mm: mglru: prevent memory cgroup release in mglru
From: Muchun Song <songmuchun@...edance.com>

In the near future, a folio will no longer pin its corresponding
memory cgroup. To keep such accesses safe, callers will have to
either hold the RCU read lock or take a reference on the memory
cgroup returned by folio_memcg(), so that it cannot be released
underneath them.

This patch uses the RCU read lock to protect against the release
of the memory cgroup in mglru.

This is a preparatory step for the reparenting of LRU pages.
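
For reference, the access pattern this series expects from
folio_memcg() callers is roughly the following sketch (illustrative
only, not part of this patch; it assumes mem_cgroup_tryget() and
mem_cgroup_put() are the helpers used when a reference must outlive
the RCU section):

	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	/* either finish using memcg here, under RCU, ... */
	if (memcg && !mem_cgroup_tryget(memcg))
		memcg = NULL;
	rcu_read_unlock();

	/* ... or use the acquired reference and drop it afterwards */
	mem_cgroup_put(memcg);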
Signed-off-by: Muchun Song <songmuchun@...edance.com>
Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
---
mm/vmscan.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 354b19f7365d4..814498a2c1bd6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3444,8 +3444,10 @@ static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 	if (folio_nid(folio) != pgdat->node_id)
 		return NULL;
 
+	rcu_read_lock();
 	if (folio_memcg(folio) != memcg)
-		return NULL;
+		folio = NULL;
+	rcu_read_unlock();
 
 	return folio;
 }
@@ -4202,12 +4204,12 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	unsigned long addr = pvmw->address;
 	struct vm_area_struct *vma = pvmw->vma;
 	struct folio *folio = pfn_folio(pvmw->pfn);
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat = folio_pgdat(folio);
-	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
-	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
-	DEFINE_MAX_SEQ(lruvec);
-	int gen = lru_gen_from_seq(max_seq);
+	struct lruvec *lruvec;
+	struct lru_gen_mm_state *mm_state;
+	unsigned long max_seq;
+	int gen;
 
 	lockdep_assert_held(pvmw->ptl);
 	VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
@@ -4242,6 +4244,13 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 		}
 	}
 
+	rcu_read_lock();
+	memcg = folio_memcg(folio);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	max_seq = READ_ONCE((lruvec)->lrugen.max_seq);
+	gen = lru_gen_from_seq(max_seq);
+	mm_state = get_mm_state(lruvec);
+
 	arch_enter_lazy_mmu_mode();
 
 	pte -= (addr - start) / PAGE_SIZE;
@@ -4282,6 +4291,8 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	if (mm_state && suitable_to_scan(i, young))
 		update_bloom_filter(mm_state, max_seq, pvmw->pmd);
 
+	rcu_read_unlock();
+
 	return true;
 }
 
--
2.20.1