Message-ID: <llwoiz4k5l44z2dyo6oubcflfarhep654cr5tvcrnkltbw4eni@kxywzukbgyha>
Date: Wed, 24 Dec 2025 00:43:00 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Qi Zheng <qi.zheng@...ux.dev>, hannes@...xchg.org, hughd@...gle.com,
mhocko@...e.com, roman.gushchin@...ux.dev, muchun.song@...ux.dev,
david@...nel.org, lorenzo.stoakes@...cle.com, ziy@...dia.com, harry.yoo@...cle.com,
imran.f.khan@...cle.com, kamalesh.babulal@...cle.com, axelrasmussen@...gle.com,
yuanchu@...gle.com, weixugc@...gle.com, chenridong@...weicloud.com, mkoutny@...e.com,
akpm@...ux-foundation.org, hamzamahfooz@...ux.microsoft.com, apais@...ux.microsoft.com,
lance.yang@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v2 00/28] Eliminate Dying Memory Cgroup
On Tue, Dec 23, 2025 at 04:36:18PM -0800, Shakeel Butt wrote:
> On Wed, Dec 24, 2025 at 12:07:50AM +0000, Yosry Ahmed wrote:
> > On Tue, Dec 23, 2025 at 03:20:47PM -0800, Shakeel Butt wrote:
> > > On Tue, Dec 23, 2025 at 08:04:50PM +0000, Yosry Ahmed wrote:
> > > [...]
> > > >
> > > > I think there might be a problem with non-hierarchical stats on cgroup
> > > > v1, I brought it up previously [*]. I am not sure if this was addressed
> > > > but I couldn't immediately find anything.
> > >
> > > Sigh, the curse of memcg-v1. Let's see what we can do to not break v1.
> > >
> > > >
> > > > In short, if memory is charged to a dying cgroup
> > >
> > > Not sure why stats updates for a dying cgroup are related. Isn't it
> > > simply that a stat increase at the child memcg followed by a stat
> > > decrease at the parent memcg could show a negative state_local at the
> > > parent?
> >
> > Hmm not sure I understand what you mean here. Normally an update to the
> > child memcg should not update state_local of the parent. So outside the
> > context of dying cgroups and reparenting I don't see how state_local of
> > the parent can become negative.
>
> We might be talking about the same thing. When you said "memory is
> charged to a dying cgroup", it does not have to be a dying cgroup; it
> can be a live child which dies later. To give an example, say a process
> in the child allocates a file page, so NR_FILE_PAGES is increased at
> the child. Next, the child is rmdir'ed, so when that specific file page
> is freed, NR_FILE_PAGES will be decreased at the parent after this
> series.
Yes, this is exactly what I mean. Specifically, an update happens after
the cgroup becomes "dying".
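
To spell out the sequence I have in mind (greatly simplified, ignoring the
per-CPU rstat machinery; state[] is the hierarchical counter and
state_local[] the non-hierarchical one):

  1. A process in the child charges a file folio:
       child->vmstats->state_local[NR_FILE_PAGES]  += nr
       child->vmstats->state[NR_FILE_PAGES]        += nr
       parent->vmstats->state[NR_FILE_PAGES]       += nr   (hierarchical only)

  2. The child is rmdir'ed and its folios are reparented (this series).

  3. The folio is freed later, so the uncharge lands on the parent:
       parent->vmstats->state_local[NR_FILE_PAGES] -= nr   (never incremented)
       parent->vmstats->state[NR_FILE_PAGES]       -= nr

At this point the parent's non-hierarchical NR_FILE_PAGES is negative.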
>
> >
> > >
> > > > at the time of
> > > > reparenting, when the memory gets uncharged the stats updates will occur
> > > > at the parent. This will update both hierarchical and non-hierarchical
> > > > stats of the parent, which would corrupt the parent's non-hierarchical
> > > > stats (because those counters were never incremented when the memory was
> > > > charged).
> > > >
> > > > I didn't track down which stats are affected by this, but off the top of
> > > > my head I think all stats tracking anon, file, etc.
> > >
> > > Let's start with what specific stats might be affected. First, the
> > > stats which are monotonically increasing should be fine, like
> > > WORKINGSET_REFAULT_[ANON|FILE], PGPG[IN|OUT], PG[MAJ]FAULT.
> > >
> > > So, the following ones are the interesting ones:
> > >
> > > NR_FILE_PAGES, NR_ANON_MAPPED, NR_ANON_THPS, NR_SHMEM, NR_FILE_MAPPED,
> > > NR_FILE_DIRTY, NR_WRITEBACK, MEMCG_SWAP, NR_SWAPCACHE.
> > >
> > > >
> > > > The obvious solution is to flush and reparent the stats of a dying memcg
> > > > during reparenting,
> > >
> > > Again, not sure how flushing will help here, and what do you mean by
> > > 'reparent the stats'? Do you mean something like:
> >
> > Oh I meant we just need to do an rstat flush to aggregate per-CPU
> > counters before moving the stats from child to parent.
> >
> > >
> > > parent->vmstats->state_local += child->vmstats->state_local;
> > >
> > > Hmm this seems fine and I think it should work.
> >
> > Something like that, I didn't look too closely if there's anything else
> > that needs to be reparented.
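
For concreteness, the kind of thing we are discussing would look roughly
like the sketch below. This is purely illustrative: the helper name is made
up, the array bound and flat state_local[] indexing are simplifications of
the real rstat layout, and synchronization against concurrent updaters is
exactly the open question below.

        /* Hypothetical: called while reparenting a dying child's folios. */
        static void mem_cgroup_reparent_state_local(struct mem_cgroup *memcg)
        {
                struct mem_cgroup *parent = parent_mem_cgroup(memcg);
                int i;

                if (!parent)
                        return;

                /* Fold pending per-CPU deltas into memcg->vmstats first. */
                mem_cgroup_flush_stats(memcg);

                /* Hand the non-hierarchical counters over to the parent. */
                for (i = 0; i < MEMCG_VMSTAT_SIZE; i++)  /* assumed bound */
                        parent->vmstats->state_local[i] +=
                                        memcg->vmstats->state_local[i];
        }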
> >
> > >
> > > > but I don't think this entirely fixes the problem
> > > > because the dying memcg stats can still be updated after its reparenting
> > > > (e.g. if a ref to the memcg has been held since before reparenting).
> > >
> > > How can dying memcg stats still be updated after reparenting? The
> > > stats which we care about are the anon & file memory, and this series
> > > is reparenting them, so a dying memcg will not see stats updates unless
> > > there is a concurrent update happening, and I think it is very easy to
> > > avoid such a situation by putting a grace period between reparenting
> > > the file/anon folios and reparenting the dying child's state_local. Am
> > > I missing something?
> >
> > What prevents the code from obtaining a ref to a parent's memcg
>
> I think you meant child's memcg here.
Yes, sorry.
>
> > before
> > reparenting, and using it to update the stats after reparenting? A grace
> > period only works if the entire scope of using the memcg is within the
> > RCU critical section.
>
> Yeah this is an issue.
>
> >
> > For example, __mem_cgroup_try_charge_swap() currently does this when
> > incrementing MEMCG_SWAP. While this specific example isn't problematic
> > because the reference won't be dropped until MEMCG_SWAP is decremented
> > again, the pattern of grabbing a ref to the memcg then updating a stat
> > could generally cause the problem.
> >
> > Most stats are updated using lruvec_stat_mod_folio(), which updates the
> > stats in the same RCU critical section as obtaining the memcg pointer
> > from the folio, so it can be fixed with a grace period. However, I think
> > it can be easily missed in the future if other code paths update memcg
> > stats in a different way. We should try to enforce that stat updates
> > can only happen from the same RCU critical section where the memcg
> > pointer is acquired.
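
To illustrate the difference between the two patterns I am referring to,
roughly (the second snippet is only the shape of the problematic pattern,
not a real call site):

        /*
         * Pattern 1: lookup and update in one RCU read-side section; a
         * grace period after reparenting is enough to make this safe.
         * Approximately what __lruvec_stat_mod_folio() does.
         */
        rcu_read_lock();
        memcg = folio_memcg(folio);
        if (memcg)
                __mod_lruvec_state(mem_cgroup_lruvec(memcg, folio_pgdat(folio)),
                                   idx, val);
        rcu_read_unlock();

        /*
         * Pattern 2: take a reference, update later outside any RCU
         * read-side section; a grace period does not help here.
         */
        memcg = get_mem_cgroup_from_mm(mm);      /* before reparenting */
        /* ... arbitrarily long time passes ... */
        mod_memcg_state(memcg, idx, val);        /* possibly after reparenting */
        css_put(&memcg->css);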
>
> The core stats update functions are mod_memcg_state() and
> mod_memcg_lruvec_state(). If, for v1 only, we add an additional check
> for CSS_DYING and go to the parent when CSS_DYING is set, then
> shouldn't we avoid this issue?
But this is still racy, right? The cgroup could become dying right after
we check CSS_DYING, no?
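
To make the race concrete, the proposed check would look something like the
sketch below (simplified pseudocode of mod_memcg_state(), not a patch):

        void mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
        {
                /* proposed v1-only redirect */
                if (memcg->css.flags & CSS_DYING)
                        memcg = parent_mem_cgroup(memcg);

                /*
                 * rmdir can mark the child CSS_DYING and reparent its folios
                 * and state_local right here, so the update below still lands
                 * on the now-dying child's local counters after they have
                 * already been handed to the parent.
                 */
                this_cpu_add(memcg->vmstats_percpu->state[idx], val);
        }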