Message-ID: <YGyJHAlLKqng2WeS@slm.duckdns.org>
Date: Tue, 6 Apr 2021 12:15:24 -0400
From: Tejun Heo <tj@...nel.org>
To: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Quentin Perret <qperret@...gle.com>, Wei Wang <wvw@...gle.com>
Subject: Re: [PATCH] cgroup: Relax restrictions on kernel threads moving out
of root cpu cgroup

Hello,

On Tue, Apr 06, 2021 at 08:57:15PM +0530, Pavan Kondeti wrote:
> Yeah. The workqueue attrs come in handy to reduce the nice/prio of a
> background workqueue if we identify that it is cpu intensive. However, this
> needs case-by-case analysis, tweaking etc. If there is no other alternative,
> we might end up chasing the background workers and reducing their nice values.

There shouldn't be that many workqueues that consume a lot of cpu cycles. The
usual culprits are kswapd and IO-related work (writeback, encryption), so it
shouldn't be a long list and we want them identified anyway.
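
For reference, a minimal sketch of how a driver opts its unbound workqueue
into those per-workqueue knobs (the "foo_wq" name and module are hypothetical,
not from this thread): with WQ_SYSFS set, the workqueue's nice and cpumask
become tunable from userspace under /sys/devices/virtual/workqueue/foo_wq/.

/* Hypothetical module: expose an unbound workqueue's attrs via sysfs. */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *foo_wq;	/* "foo_wq" is a made-up name */

static int __init foo_init(void)
{
	/* WQ_SYSFS creates /sys/devices/virtual/workqueue/foo_wq/{nice,cpumask} */
	foo_wq = alloc_workqueue("foo_wq", WQ_UNBOUND | WQ_SYSFS, 0);
	if (!foo_wq)
		return -ENOMEM;
	return 0;
}

static void __exit foo_exit(void)
{
	destroy_workqueue(foo_wq);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");
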
> The problem at hand (which you might already know) is that, let's say
> we have 2 cgroups in our setup and we want to prioritize UX over background.
> IOW, reduce the cpu.shares of the background cgroup. This helps in prioritizing
> Task-A and Task-B over Task-X and Task-Y. However, each individual kworker
> can potentially have a CPU share equal to that of the entire UX cgroup.
>
> -kworker/0:0
> -kworker/1:0
> - UX
> ----Task-A
> ----Task-B
> - background
> ----Task-X
> ----Task-Y
> Hence we want to move all kernel threads to another cgroup so that this cgroup
> will have a CPU share equal to that of UX.
>
> The patch presented here allows us to create the above setup. Are there any
> other approaches to achieve the same without impacting any future
> designs/requirements?

Not quite the same, but we already have
/sys/devices/virtual/workqueue/cpumask, which affects all unbound workqueues,
so maybe a global default priority knob would help here?
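
For comparison, a rough userspace sketch of the cgroup v1 setup described in
the quoted hierarchy (it assumes the cpu controller is mounted at
/sys/fs/cgroup/cpu; the group names and share values are only illustrative):

/* Build the UX/background split with cpu.shares (illustrative values). */
#include <stdio.h>
#include <sys/stat.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* the two groups from the hierarchy quoted above */
	mkdir("/sys/fs/cgroup/cpu/UX", 0755);
	mkdir("/sys/fs/cgroup/cpu/background", 0755);

	/* UX keeps the default weight; background gets a fraction of it */
	write_str("/sys/fs/cgroup/cpu/UX/cpu.shares", "1024");
	write_str("/sys/fs/cgroup/cpu/background/cpu.shares", "100");

	/*
	 * With a relaxation like the proposed patch, unbound kworkers could
	 * likewise be written into a dedicated group's tasks file so they
	 * no longer compete from the root at full weight.
	 */
	return 0;
}
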
Thanks.
--
tejun