Message-ID: <CAJD7tkY1B9M6A8jHRuw4H8R95S9V4j_BkSQkDnr87_Tir+7VAA@mail.gmail.com>
Date: Mon, 22 Jul 2024 23:24:14 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, tj@...nel.org, cgroups@...r.kernel.org, 
	hannes@...xchg.org, lizefan.x@...edance.com, longman@...hat.com, 
	kernel-team@...udflare.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by
 kswapd across NUMA nodes

On Mon, Jul 22, 2024 at 3:59 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> On Mon, Jul 22, 2024 at 02:32:03PM GMT, Shakeel Butt wrote:
> > On Mon, Jul 22, 2024 at 01:12:35PM GMT, Yosry Ahmed wrote:
> > > On Mon, Jul 22, 2024 at 1:02 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > > >
> > > > On Fri, Jul 19, 2024 at 09:52:17PM GMT, Yosry Ahmed wrote:
> > > > > On Fri, Jul 19, 2024 at 3:48 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > > > > >
> > > > > > On Fri, Jul 19, 2024 at 09:54:41AM GMT, Jesper Dangaard Brouer wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 19/07/2024 02.40, Shakeel Butt wrote:
> > > > > > > > Hi Jesper,
> > > > > > > >
> > > > > > > > On Wed, Jul 17, 2024 at 06:36:28PM GMT, Jesper Dangaard Brouer wrote:
> > > > > > > > >
> > > > > > > > [...]
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Looking at the production numbers for the time the lock is held for level 0:
> > > > > > > > >
> > > > > > > > > @locked_time_level[0]:
> > > > > > > > > [4M, 8M)     623 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@               |
> > > > > > > > > [8M, 16M)    860 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > > > > > > > > [16M, 32M)   295 |@@@@@@@@@@@@@@@@@                                   |
> > > > > > > > > [32M, 64M)   275 |@@@@@@@@@@@@@@@@                                    |
> > > > > > > > >
> > > > > > > >
> > > > > > > > Is it possible to get the above histogram for other levels as well?
> > > > > > >
> > > > > > > Data from other levels available in [1]:
> > > > > > >  [1]
> > > > > > > https://lore.kernel.org/all/8c123882-a5c5-409a-938b-cb5aec9b9ab5@kernel.org/
> > > > > > >
> > > > > > > IMHO the data shows we will get most out of skipping level-0 root-cgroup
> > > > > > > flushes.
> > > > > > >
> > > > > >
> > > > > > Thanks a lot for the data. Are all or most of these locked_time_level[0]
> > > > > > from kswapds? This just motivates me to strongly push the ratelimited
> > > > > > flush patch of mine (which would be orthogonal to your patch series).
> > > > >
> > > > > Jesper and I were discussing a better ratelimiting approach, whether
> > > > > it's measuring the time since the last flush, or only skipping if we
> > > > > have a lot of flushes in a specific time frame (using __ratelimit()).
> > > > > I believe this would be better than the current memcg ratelimiting
> > > > > approach, and we can remove the latter.
> > > > >
> > > > > WDYT?
> > > >
> > > > The last statement gives me the impression that you are trying to fix
> > > > something that is not broken. The current ratelimited users are fine; the
> > > > issue is with the sync flushers. Or are you suggesting that the new
> > > > ratelimiting will be used for both the sync flushers and the current
> > > > ratelimited users, and that it will strike a good tradeoff between
> > > > accuracy and potential flush stalls?
> > >
> > > The latter. Basically the idea is to have more informed and generic
> > > ratelimiting logic in the core rstat flushing code (e.g. using
> > > __ratelimit()), which would apply to ~all flushers*. Then, we ideally
> > > wouldn't need mem_cgroup_flush_stats_ratelimited() at all.
> > >
> >
> > I wonder if we really need a universal ratelimit. As you noted below,
> > there are cases where we want exact stats, and there are cases where
> > accurate stats are not needed but the callers are very performance
> > sensitive. Aiming for a solution that ignores such differences might
> > be a futile effort.
> >
>
> BTW I am not against it. If we can achieve this with minimal regressions
> and maintenance burden then it would be preferable.

It is possible that it is a futile effort, but if it works, the memcg
flushing interface will be much better and we won't have to evaluate
whether ratelimiting is needed on a case-by-case basis.

According to Jesper's data, allowing at most one flush every 50ms may be
reasonable, which means we can ratelimit the flushes to roughly 20
flushes per second. I think on average this should provide enough
accuracy for most use cases, and it should also cut down the flushes in
the cases that Jesper presented.
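
Roughly something like the below in the core flushing path (an untested
sketch just to illustrate the idea; the wrapper name and the
interval/burst values are made up, not from any posted patch):

#include <linux/ratelimit.h>

/* Allow at most ~20 flushes per second (one per 50ms), burst of 1. */
static DEFINE_RATELIMIT_STATE(cgroup_rstat_flush_rs, HZ / 20, 1);

/* Hypothetical ratelimited wrapper around cgroup_rstat_flush(). */
void cgroup_rstat_flush_ratelimited(struct cgroup *cgrp)
{
	/* Skip if a flush already happened within the last ~50ms. */
	if (!__ratelimit(&cgroup_rstat_flush_rs))
		return;

	cgroup_rstat_flush(cgrp);
}

Callers that currently go through mem_cgroup_flush_stats_ratelimited()
(and sync flushers that can tolerate some staleness) could then just use
such a wrapper instead.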

It's probably worth a try, especially since it does not involve
changing any user-visible ABIs, so we can always go back to what we
have today.
