Message-ID: <x45wrx26boy2junfx6wzrfgdlvhvw6gji5grreadcrobs6jvhu@o5bn2hcpxul3>
Date: Mon, 22 Jul 2024 14:32:03 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, tj@...nel.org, 
	cgroups@...r.kernel.org, hannes@...xchg.org, lizefan.x@...edance.com, longman@...hat.com, 
	kernel-team@...udflare.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by
 kswapd across NUMA nodes

On Mon, Jul 22, 2024 at 01:12:35PM GMT, Yosry Ahmed wrote:
> On Mon, Jul 22, 2024 at 1:02 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> >
> > On Fri, Jul 19, 2024 at 09:52:17PM GMT, Yosry Ahmed wrote:
> > > On Fri, Jul 19, 2024 at 3:48 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > > >
> > > > On Fri, Jul 19, 2024 at 09:54:41AM GMT, Jesper Dangaard Brouer wrote:
> > > > >
> > > > >
> > > > > On 19/07/2024 02.40, Shakeel Butt wrote:
> > > > > > Hi Jesper,
> > > > > >
> > > > > > On Wed, Jul 17, 2024 at 06:36:28PM GMT, Jesper Dangaard Brouer wrote:
> > > > > > >
> > > > > > [...]
> > > > > > >
> > > > > > >
> > > > > > > Looking at the production numbers for the time the lock is held for level 0:
> > > > > > >
> > > > > > > @locked_time_level[0]:
> > > > > > > [4M, 8M)     623 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@               |
> > > > > > > [8M, 16M)    860 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > > > > > > [16M, 32M)   295 |@@@@@@@@@@@@@@@@@                                   |
> > > > > > > [32M, 64M)   275 |@@@@@@@@@@@@@@@@                                    |
> > > > > > >
> > > > > >
> > > > > > Is it possible to get the above histogram for other levels as well?
> > > > >
> > > > > Data from the other levels is available in [1]:
> > > > >  [1]
> > > > > https://lore.kernel.org/all/8c123882-a5c5-409a-938b-cb5aec9b9ab5@kernel.org/
> > > > >
> > > > > IMHO the data shows we will get the most out of skipping level-0
> > > > > root-cgroup flushes.
> > > > >
> > > >
> > > > Thanks a lot for the data. Are all or most of these locked_time_level[0]
> > > > from kswapds? This just motivates me to strongly push the ratelimited
> > > > flush patch of mine (which would be orthogonal to your patch series).
> > >
> > > Jesper and I were discussing a better ratelimiting approach, whether
> > > it's measuring the time since the last flush, or only skipping if we
> > > have a lot of flushes in a specific time frame (using __ratelimit()).
> > > I believe this would be better than the current memcg ratelimiting
> > > approach, and we can remove the latter.
> > >
> > > WDYT?
> >
> > The last statement gives me the impression that you are trying to fix
> > something that is not broken. The current ratelimiting users are ok; the
> > issue is with the sync flushers. Or are you suggesting that the new
> > ratelimiting will be used for all sync flushers as well as the current
> > ratelimiting users, and that it will make a good tradeoff between
> > accuracy and potential flush stalls?
> 
> The latter. Basically the idea is to have more informed and generic
> ratelimiting logic in the core rstat flushing code (e.g. using
> __ratelimit()), which would apply to ~all flushers*. Then, we ideally
> wouldn't need mem_cgroup_flush_stats_ratelimited() at all.
> 

I wonder if we really need a universal ratelimit. As you noted below,
there are cases where we want exact stats, and there are cases where
accurate stats are not needed but the callers are very performance
sensitive. Aiming for a single solution that ignores such differences
might be a futile effort.
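For reference, the interval-based gating being discussed can be sketched in
plain C. This only models the policy (allow up to a burst of flushes per time
window, then suppress until the window rolls over), in the spirit of the
kernel's __ratelimit(); all names here are illustrative, not real rstat code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of an interval+burst ratelimit: allow up to
 * 'burst' flushes per 'interval_ns' nanoseconds, suppress the rest. */
struct flush_ratelimit {
	uint64_t interval_ns;  /* window length */
	int burst;             /* flushes allowed per window */
	uint64_t window_start; /* start of the current window */
	int used;              /* flushes consumed in this window */
};

/* Returns true if a flush should proceed at time 'now_ns'. */
static bool flush_allowed(struct flush_ratelimit *rl, uint64_t now_ns)
{
	if (now_ns - rl->window_start >= rl->interval_ns) {
		rl->window_start = now_ns; /* new window: reset the budget */
		rl->used = 0;
	}
	if (rl->used < rl->burst) {
		rl->used++;
		return true;
	}
	return false; /* caller skips the flush and reads slightly stale stats */
}
```

A sync flusher such as kswapd would consult a gate like this before taking the
rstat lock; when it returns false, the caller relies on stats folded in by a
recent flusher instead of contending for the lock.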

> *The obvious exception is the force flushing case we discussed for
> cgroup_rstat_exit().
> 
> In fact, I think we need that even with the ongoing flusher
> optimization, because I think there is a slight chance that a flush is
> missed. It wouldn't be problematic for other flushers, but it
> certainly can be for cgroup_rstat_exit() as the stats will be
> completely dropped.
> 
> The scenario I have in mind is:
> - CPU 1 starts a flush of cgroup A. The flush completes, but waiters are
> not woken up yet.
> - CPU 2 updates the stats of cgroup A after it is flushed by CPU 1.
> - CPU 3 calls cgroup_rstat_exit(), sees the ongoing flusher and waits.
> - CPU 1 wakes up the waiters.
> - CPU 3 proceeds to destroy cgroup A, and the updates made by CPU 2 are lost.
