Message-ID: <z2uigo5ku5qihgdbsopodj6rblghjhg2d7q3qv2vjwsjtsar5n@6tlsyphswauq>
Date: Wed, 9 Apr 2025 18:49:47 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Waiman Long <llong@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>, 
	Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>, 
	Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org, cgroups@...r.kernel.org, 
	linux-kernel@...r.kernel.org, Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH] memcg: optimize memcg_rstat_updated

On Wed, Apr 09, 2025 at 09:20:34PM -0400, Waiman Long wrote:
> On 4/9/25 7:49 PM, Shakeel Butt wrote:
> > Currently the kernel maintains per-memcg stats updates, which are
> > needed to implement the stats flushing threshold. On the update side,
> > the update is added to the per-cpu per-memcg update counter of the
> > given memcg and all of its ancestors. However, when the given memcg has
> > passed the flushing threshold, all of its ancestors must have passed
> > the threshold as well, so there is no need to traverse up the memcg
> > tree to maintain the stats updates.
> > 
> > Perf profiles collected from our fleet show that memcg_rstat_updated is
> > one of the most expensive memcg functions, i.e. a lot of cumulative CPU
> > time is spent in it. So even small micro-optimizations matter a lot.
> > 
> > Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> > ---
> >   mm/memcontrol.c | 16 +++++++++-------
> >   1 file changed, 9 insertions(+), 7 deletions(-)
> > 
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 421740f1bcdc..ea3e40e589df 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -585,18 +585,20 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> >   	cgroup_rstat_updated(memcg->css.cgroup, cpu);
> >   	statc = this_cpu_ptr(memcg->vmstats_percpu);
> >   	for (; statc; statc = statc->parent) {
> > +		/*
> > +		 * If @memcg is already flushable then all its ancestors are
> > +		 * flushable as well and also there is no need to increase
> > +		 * stats_updates.
> > +		 */
> > +		if (!memcg_vmstats_needs_flush(statc->vmstats))
> > +			break;
> > +
> 
> Do you mean "if (memcg_vmstats_needs_flush(statc->vmstats))"?
> 

Yup, you are right, thanks for catching this. I will send a v2.
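
For illustration, below is a stand-alone, user-space sketch of the early-break
idea with the condition written the way v2 is expected to have it (break once
the current level already needs a flush). The names struct node, needs_flush()
and THRESHOLD are simplified stand-ins for memcg_vmstats_percpu,
memcg_vmstats_needs_flush() and the real flushing threshold in mm/memcontrol.c;
this is not the actual v2 patch.

/*
 * Stand-alone sketch (user-space C, not the kernel patch) of the
 * early-break idea: stop the ancestor walk as soon as the current
 * level already needs a flush.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define THRESHOLD 64			/* stand-in for the real flush threshold */

struct node {				/* stand-in for memcg_vmstats_percpu */
	struct node *parent;
	long stats_updates;		/* accumulated pending updates */
};

/* stand-in for memcg_vmstats_needs_flush() */
static bool needs_flush(const struct node *n)
{
	return n->stats_updates >= THRESHOLD;
}

static void rstat_updated(struct node *n, int val)
{
	for (; n; n = n->parent) {
		/*
		 * If this level is already flushable then all its ancestors
		 * are flushable as well: they received every update this
		 * level received, so there is no need to keep counting.
		 */
		if (needs_flush(n))
			break;
		n->stats_updates += abs(val);
	}
}

int main(void)
{
	struct node root = { .parent = NULL, .stats_updates = 0 };
	struct node child = { .parent = &root, .stats_updates = 0 };

	for (int i = 0; i < 100; i++)
		rstat_updated(&child, 1);

	/* Both counters stop at the threshold; later walks break early. */
	printf("child=%ld root=%ld\n", child.stats_updates, root.stats_updates);
	return 0;
}

The invariant the commit message relies on holds here too: an ancestor's
counter is incremented whenever a descendant's is, so once a level is
flushable, every ancestor is as well and the walk can stop.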
