Message-ID: <CABk29Nua8ZsDfhY+x+VfYDkbkjfXLXTZ5JMVR9uiBygraxDM+g@mail.gmail.com>
Date:   Tue, 1 Nov 2022 13:56:29 -0700
From:   Josh Don <joshdon@...gle.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        linux-kernel@...r.kernel.org,
        Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth

On Tue, Nov 1, 2022 at 12:15 PM Tejun Heo <tj@...nel.org> wrote:
>
> Hello,
>
> On Tue, Nov 01, 2022 at 12:11:30PM -0700, Josh Don wrote:
> > > Just to better understand the situation, can you give some more details on
> > > the scenarios where cgroup_mutex was in the middle of a shitshow?
> >
> > There have been a couple, I think one of the main ones has been writes
> > to cgroup.procs. cpuset modifications also show up since there's a
> > mutex there.
>
> If you can, I'd really like to learn more about the details. We've had some
> issues with the threadgroup_rwsem because it's such a big hammer, but not
> necessarily with cgroup_mutex, because it's only used in maintenance
> operations and never from any hot paths.
>
> Regarding threadgroup_rwsem, w/ CLONE_INTO_CGROUP (userspace support is
> still missing unfortunately), the usual workflow of creating a cgroup,
> seeding it with a process and then later shutting it down doesn't involve
> threadgroup_rwsem at all, so most of the problems should go away in the
> hopefully near future.
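
For reference, that workflow looks roughly like the sketch below. This
is just a hedged illustration (the cgroup path and the sleep payload
are made up, not anything from a real tree): the child is created
directly inside the target cgroup via clone3() with CLONE_INTO_CGROUP
(kernel 5.7+), so no write to cgroup.procs, and none of the
threadgroup_rwsem work in that attach path, is needed.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/sched.h>	/* struct clone_args, CLONE_INTO_CGROUP */
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* Open the destination cgroup directory (path is hypothetical). */
	int cgfd = open("/sys/fs/cgroup/myjob", O_RDONLY | O_DIRECTORY);
	if (cgfd < 0) {
		perror("open cgroup");
		return 1;
	}

	struct clone_args args = {
		.flags       = CLONE_INTO_CGROUP,
		.exit_signal = SIGCHLD,
		.cgroup      = (unsigned long)cgfd,
	};

	/* clone3() has no glibc wrapper; invoke it directly. */
	long pid = syscall(SYS_clone3, &args, sizeof(args));
	if (pid < 0) {
		perror("clone3");
		return 1;
	}
	if (pid == 0) {
		/* Child starts life already inside the target cgroup,
		 * so no cgroup.procs write is needed to seed it. */
		execlp("sleep", "sleep", "1", (char *)NULL);
		_exit(127);
	}
	waitpid(pid, NULL, 0);
	close(cgfd);
	return 0;
}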

Maybe walking through an example would be helpful? I don't know if
there's anything super specific. For cgroup_mutex, for example, the
same global mutex is taken for things like cgroup mkdir and cgroup
proc attach, regardless of which part of the hierarchy is being
modified. So we end up sharing that mutex between random job threads
(i.e., threads that may be manipulating their own cgroup
sub-hierarchy) and control plane threads, which are attempting to
manage root-level cgroups. Bad things happen when cgroup_mutex (or a
similar lock) is held by a random thread that blocks while at low
scheduling priority: when it wakes back up, it may take quite a while
to run again (whether that low priority is due to CFS bandwidth,
sched_idle, or even just O(hundreds) of threads on a cpu). Starving
out the control plane causes us significant issues, since that
affects machine health. cgroup manipulation is not a hot path
operation, but the control plane hits it fairly often, and at our
scale those things combine to produce this rare problem.
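
To make that inversion concrete, here's a minimal userspace analogue
(a sketch only; fake_cgroup_mutex, the thread names, and the sleep
durations are all illustrative): a sched_idle "job" thread grabs a
shared lock, and a normal-priority "control plane" thread then blocks
on it. On a saturated cpu the sched_idle holder barely runs, so the
waiter's latency blows up accordingly.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t fake_cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *job_thread(void *arg)
{
	(void)arg;
	/* Demote ourselves to SCHED_IDLE, like a low-priority batch task. */
	struct sched_param sp = { 0 };
	pthread_setschedparam(pthread_self(), SCHED_IDLE, &sp);

	pthread_mutex_lock(&fake_cgroup_mutex);
	usleep(100 * 1000);	/* "mkdir/attach" work while holding the lock */
	pthread_mutex_unlock(&fake_cgroup_mutex);
	return NULL;
}

static void *control_thread(void *arg)
{
	(void)arg;
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	pthread_mutex_lock(&fake_cgroup_mutex);	/* root-level cgroup op */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	pthread_mutex_unlock(&fake_cgroup_mutex);

	printf("control plane waited %.3f ms\n",
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	return NULL;
}

int main(void)
{
	pthread_t job, ctl;

	pthread_create(&job, NULL, job_thread, NULL);
	usleep(10 * 1000);	/* let the job thread grab the lock first */
	pthread_create(&ctl, NULL, control_thread, NULL);
	pthread_join(job, NULL);
	pthread_join(ctl, NULL);
	return 0;
}

On an otherwise-idle machine the printed wait is roughly the 100ms
hold time; run it alongside enough cpu hogs and the sched_idle holder
gets starved, so the control plane's wait can stretch much longer.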

>
> Thanks.
>
> --
> tejun
