Message-ID: <ejywj2fho37z4zdtgvryxzsztgtdrfop4ekenee4fewholyugq@xrbvtg5ui3ty>
Date: Fri, 19 Dec 2025 17:11:36 -0800
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Qi Zheng <qi.zheng@...ux.dev>, hughd@...gle.com, mhocko@...e.com,
roman.gushchin@...ux.dev, muchun.song@...ux.dev, david@...nel.org,
lorenzo.stoakes@...cle.com, ziy@...dia.com, harry.yoo@...cle.com, imran.f.khan@...cle.com,
kamalesh.babulal@...cle.com, axelrasmussen@...gle.com, yuanchu@...gle.com, weixugc@...gle.com,
chenridong@...weicloud.com, mkoutny@...e.com, akpm@...ux-foundation.org,
hamzamahfooz@...ux.microsoft.com, apais@...ux.microsoft.com, lance.yang@...ux.dev,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v2 17/28] mm: thp: prevent memory cgroup release in
folio_split_queue_lock{_irqsave}()
On Wed, Dec 17, 2025 at 05:27:17PM -0500, Johannes Weiner wrote:
> On Wed, Dec 17, 2025 at 03:27:41PM +0800, Qi Zheng wrote:
> > From: Qi Zheng <zhengqi.arch@...edance.com>
> >
> > In the near future, a folio will no longer pin its corresponding memory
> > cgroup. To keep the memory cgroup returned by folio_memcg() from being
> > released, callers will have to either hold the rcu read lock or take a
> > reference on it.
> >
> > This patch uses the rcu read lock to protect against the release of the
> > memory cgroup in folio_split_queue_lock{_irqsave}().
> >
> > Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
> > Reviewed-by: Harry Yoo <harry.yoo@...cle.com>
> > ---
> > mm/huge_memory.c | 16 ++++++++++++++--
> > 1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 12b46215b30c1..b9e6855ec0b6a 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1154,13 +1154,25 @@ split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg, unsigned long *flags
> >
> > static struct deferred_split *folio_split_queue_lock(struct folio *folio)
> > {
> > - return split_queue_lock(folio_nid(folio), folio_memcg(folio));
> > + struct deferred_split *queue;
> > +
> > + rcu_read_lock();
> > + queue = split_queue_lock(folio_nid(folio), folio_memcg(folio));
> > + rcu_read_unlock();
>
> Ah, the memcg destruction path is acquiring the split queue lock for
> reparenting. Once you have it locked, it's safe to drop the rcu lock.
Qi, please add the above explanation as a comment. Something along these
lines (untested, just to illustrate; exact wording up to you):
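
static struct deferred_split *folio_split_queue_lock(struct folio *folio)
{
	struct deferred_split *queue;

	/*
	 * The memcg returned by folio_memcg() is only stable under the
	 * rcu read lock. It is safe to drop the rcu lock once the split
	 * queue lock is held, because the memcg destruction path takes
	 * this lock when reparenting the deferred split queue.
	 */
	rcu_read_lock();
	queue = split_queue_lock(folio_nid(folio), folio_memcg(folio));
	rcu_read_unlock();

	return queue;
}

With that: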
Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>