Message-ID: <utl6esq7jz5e4f7kwgrpwdjc2rm3yi33ljb6xkm5nxzufa4o7s@hblq2piu3vnz>
Date: Wed, 24 Dec 2025 00:07:50 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Qi Zheng <qi.zheng@...ux.dev>, hannes@...xchg.org, hughd@...gle.com, 
	mhocko@...e.com, roman.gushchin@...ux.dev, muchun.song@...ux.dev, 
	david@...nel.org, lorenzo.stoakes@...cle.com, ziy@...dia.com, harry.yoo@...cle.com, 
	imran.f.khan@...cle.com, kamalesh.babulal@...cle.com, axelrasmussen@...gle.com, 
	yuanchu@...gle.com, weixugc@...gle.com, chenridong@...weicloud.com, mkoutny@...e.com, 
	akpm@...ux-foundation.org, hamzamahfooz@...ux.microsoft.com, apais@...ux.microsoft.com, 
	lance.yang@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	cgroups@...r.kernel.org, Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v2 00/28] Eliminate Dying Memory Cgroup

On Tue, Dec 23, 2025 at 03:20:47PM -0800, Shakeel Butt wrote:
> On Tue, Dec 23, 2025 at 08:04:50PM +0000, Yosry Ahmed wrote:
> [...]
> > 
> > I think there might be a problem with non-hierarchical stats on cgroup
> > v1, I brought it up previously [*]. I am not sure if this was addressed
> > but I couldn't immediately find anything.
> 
> Sigh, the curse of memcg-v1. Let's see what we can do to not break v1.
> 
> > 
> > In short, if memory is charged to a dying cgroup 
> 
> Not sure why stats updates for a dying cgroup are related. Isn't it
> simply that a stat increase at the child memcg followed by a stat
> decrease at the parent memcg would possibly show a negative
> state_local at the parent?

Hmm not sure I understand what you mean here. Normally an update to the
child memcg should not update state_local of the parent. So outside the
context of dying cgroups and reparenting I don't see how state_local of
the parent can become negative.

> 
> > at the time of
> > reparenting, when the memory gets uncharged the stats updates will occur
> > at the parent. This will update both hierarchical and non-hierarchical
> > stats of the parent, which would corrupt the parent's non-hierarchical
> > stats (because those counters were never incremented when the memory was
> > charged).
> > 
> > I didn't track down which stats are affected by this, but off the top of
> > my head I think all stats tracking anon, file, etc.
> 
> Let's start with what specific stats might be affected. First, the stats
> which are monotonically increasing should be fine, like
> WORKINGSET_REFAULT_[ANON|FILE], PGPG[IN|OUT], PG[MAJ]FAULT.
> 
> So, the following ones are the interesting ones:
> 
> NR_FILE_PAGES, NR_ANON_MAPPED, NR_ANON_THPS, NR_SHMEM, NR_FILE_MAPPED,
> NR_FILE_DIRTY, NR_WRITEBACK, MEMCG_SWAP, NR_SWAPCACHE.
> 
> > 
> > The obvious solution is to flush and reparent the stats of a dying memcg
> > during reparenting,
> 
> Again, not sure how flushing will help here, or what you mean by
> 'reparent the stats'. Do you mean something like:

Oh I meant we just need to do an rstat flush to aggregate per-CPU
counters before moving the stats from child to parent.

> 
> parent->vmstats->state_local += child->vmstats->state_local;
> 
> Hmm this seems fine and I think it should work.

Something like that, I didn't look too closely if there's anything else
that needs to be reparented.

> 
> > but I don't think this entirely fixes the problem
> > because the dying memcg stats can still be updated after its reparenting
> > (e.g. if a ref to the memcg has been held since before reparenting).
> 
> How can dying memcg stats still be updated after reparenting? The
> stats which we care about are the anon & file memory and this series is
> reparenting them, so the dying memcg will not see stats updates unless
> there is a concurrent update happening, and I think it is very easy to
> avoid such a situation by putting a grace period between reparenting the
> file/anon folios and reparenting the dying child's state_local. Am I
> missing something?

What prevents the code from obtaining a ref to a memcg before
reparenting, and using it to update its stats after reparenting? A grace
period only works if the entire scope of using the memcg is within the
RCU critical section.

For example, __mem_cgroup_try_charge_swap() currently does this when
incrementing MEMCG_SWAP. While this specific example isn't problematic
because the reference won't be dropped until MEMCG_SWAP is decremented
again, the pattern of grabbing a ref to the memcg then updating a stat
could generally cause the problem.

Most stats are updated using lruvec_stat_mod_folio(), which updates the
stats in the same RCU critical section as obtaining the memcg pointer
from the folio, so it can be fixed with a grace period. However, I think
it can be easily missed in the future if other code paths update memcg
stats in a different way. We should try to enforce that stat updates
can only happen within the same RCU critical section where the memcg
pointer is acquired.
