Message-ID: <Y2FwVX42LIKXSTz3@slm.duckdns.org>
Date:   Tue, 1 Nov 2022 09:15:33 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Josh Don <joshdon@...gle.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        linux-kernel@...r.kernel.org,
        Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth
Hello,
On Tue, Nov 01, 2022 at 12:11:30PM -0700, Josh Don wrote:
> > Just to better understand the situation, can you give some more details on
> > the scenarios where cgroup_mutex was in the middle of a shitshow?
> 
> There have been a couple, I think one of the main ones has been writes
> to cgroup.procs. cpuset modifications also show up since there's a
> mutex there.
If you can, I'd really like to learn more about the details. We've had some
issues with the threadgroup_rwsem because it's such a big hammer, but not
necessarily with cgroup_mutex, because it is only taken in maintenance
operations and never from any hot paths.
Regarding threadgroup_rwsem, w/ CLONE_INTO_CGROUP (userspace support is
still missing unfortunately), the usual workflow of creating a cgroup,
seeding it with a process and then later shutting it down doesn't involve
threadgroup_rwsem at all, so most of the problems should go away in the
hopefully near future.
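For reference, a minimal userspace sketch of that CLONE_INTO_CGROUP flow,
assuming a cgroup2 hierarchy mounted at /sys/fs/cgroup (the cgroup path and
the bare-bones error handling are purely illustrative):

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/sched.h>	/* struct clone_args, CLONE_INTO_CGROUP */

int main(void)
{
	/* hypothetical cgroup path, assumes a cgroup2 mount at /sys/fs/cgroup */
	const char *cg = "/sys/fs/cgroup/test";
	pid_t pid;
	int cgfd;

	/* create the cgroup and grab an fd on its directory
	 * (EEXIST ignored for brevity) */
	mkdir(cg, 0755);
	cgfd = open(cg, O_RDONLY | O_DIRECTORY);
	if (cgfd < 0) {
		perror("open");
		return 1;
	}

	/* clone3() with CLONE_INTO_CGROUP places the child in the target
	 * cgroup at fork time, so no later write to cgroup.procs (and no
	 * threadgroup_rwsem-protected migration) is needed */
	struct clone_args args = {
		.flags		= CLONE_INTO_CGROUP,
		.exit_signal	= SIGCHLD,
		.cgroup		= (unsigned long long)cgfd,
	};

	pid = syscall(SYS_clone3, &args, sizeof(args));
	if (pid == 0) {
		/* child: already a member of /sys/fs/cgroup/test */
		execlp("sleep", "sleep", "1", (char *)NULL);
		_exit(127);
	}

	waitpid(pid, NULL, 0);
	close(cgfd);
	rmdir(cg);	/* empty again once the child has exited */
	return 0;
}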
Thanks.
-- 
tejun