Message-ID: <jhjmtyktri8.mognet@arm.com>
Date: Fri, 11 Dec 2020 13:13:35 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Donnefort <vincent.donnefort@....com>
Cc: linux-kernel@...r.kernel.org, Qian Cai <cai@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, tglx@...utronix.de,
mingo@...nel.org, bigeasy@...utronix.de, qais.yousef@....com,
swood@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, tj@...nel.org, ouwen210@...mail.com
Subject: Re: [PATCH 2/2] workqueue: Fix affinity of kworkers attached during late hotplug
On 11/12/20 12:51, Valentin Schneider wrote:
>> In that case, maybe we should check for the cpu_active_mask here too?
>
> Looking at it again, I think we might need to.
>
> IIUC you can end up with pools bound to a single NUMA node (?). In that
> case, say the last CPU of a node is going down, then:
>
> workqueue_offline_cpu()
> wq_update_unbound_numa()
> alloc_unbound_pwq()
> get_unbound_pool()
>
> would still pick that node, because it doesn't look at the online /
> active mask. And at this point, we would affine the kworkers to that
> node, and we're back to having kworkers enqueued on a (!active, online)
> CPU that is going down...
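
FWIW, here's roughly what the node selection in get_unbound_pool() looks
like (paraphrased from kernel/workqueue.c, so not verbatim). Note that it
only ever consults wq_numa_possible_cpumask[] - i.e. *possible* CPUs -
and never cpu_online_mask or cpu_active_mask:

	int target_node = NUMA_NO_NODE;
	int node;

	/*
	 * If the attrs' cpumask is contained inside a NUMA node, the
	 * pool belongs to that node. "Contained inside" is tested
	 * against the *possible* mask, so a node whose CPUs are all
	 * offline (or going offline) can still be picked here.
	 */
	if (wq_numa_enabled) {
		for_each_node(node) {
			if (cpumask_subset(attrs->cpumask,
					   wq_numa_possible_cpumask[node])) {
				target_node = node;
				break;
			}
		}
	}
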
Assuming a node covers at least 2 CPUs, that can't actually happen per
is_cpu_allowed().
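
To spell out the is_cpu_allowed() angle, here is a very rough paraphrase
of the relevant logic in kernel/sched/core.c (simplified and from memory,
so not the verbatim code):

	static inline bool is_per_cpu_kthread(struct task_struct *p)
	{
		/* A kthread pinned to exactly one CPU. */
		if (!(p->flags & PF_KTHREAD))
			return false;
		if (p->nr_cpus_allowed != 1)
			return false;
		return true;
	}

	static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
	{
		/* Not in the task's affinity mask: never allowed. */
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			return false;

		/* Userspace tasks need the CPU to be fully active. */
		if (!(p->flags & PF_KTHREAD))
			return cpu_active(cpu);

		/*
		 * Pinned kthreads may keep running on an online but
		 * !active CPU during hotplug...
		 */
		if (is_per_cpu_kthread(p))
			return cpu_online(cpu);

		/*
		 * ... any other kthread gets pushed off a CPU once it
		 * stops being active.
		 */
		return cpu_active(cpu);
	}

With the pool's cpumask spanning the whole node, its kworkers have
nr_cpus_allowed > 1, so is_per_cpu_kthread() is false and they can only
ever be placed on an active CPU - hence the ">= 2 CPUs per node"
assumption above.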