Message-ID: <20201211131638.GA142813@e120877-lin.cambridge.arm.com>
Date: Fri, 11 Dec 2020 13:16:38 +0000
From: Vincent Donnefort <vincent.donnefort@....com>
To: Valentin Schneider <valentin.schneider@....com>
Cc: linux-kernel@...r.kernel.org, Qian Cai <cai@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, tglx@...utronix.de,
mingo@...nel.org, bigeasy@...utronix.de, qais.yousef@....com,
swood@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, tj@...nel.org, ouwen210@...mail.com
Subject: Re: [PATCH 2/2] workqueue: Fix affinity of kworkers attached during
late hotplug
On Fri, Dec 11, 2020 at 01:13:35PM +0000, Valentin Schneider wrote:
> On 11/12/20 12:51, Valentin Schneider wrote:
> >> In that case maybe we should check for the cpu_active_mask here too?
> >
> > Looking at it again, I think we might need to.
> >
> > IIUC you can end up with pools bound to a single NUMA node (?). In that
> > case, say the last CPU of a node is going down, then:
> >
> > workqueue_offline_cpu()
> > wq_update_unbound_numa()
> > alloc_unbound_pwq()
> > get_unbound_pool()
> >
> > would still pick that node, because it doesn't look at the online / active
> > mask. And at this point, we would affine the
> > kworkers to that node, and we're back to having kworkers enqueued on a
> > (!active, online) CPU that is going down...
>
> Assuming a node covers at least 2 CPUs, that can't actually happen per
> is_cpu_allowed().
Yes indeed, my bad, no problem here.
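
For reference, here is a trimmed-down sketch of the node selection that
get_unbound_pool() (kernel/workqueue.c, ~v5.10) open-codes; the helper name
below is made up purely for illustration. The point the quoted call chain
relies on is visible here: only the per-node *possible* masks are consulted,
never cpu_online_mask or cpu_active_mask, so a node whose CPUs are all on
their way out can still be picked.

/*
 * Illustration only: a stand-in for the node selection open-coded in
 * get_unbound_pool(). The pool attrs are matched against
 * wq_numa_possible_cpumask[], i.e. possible CPUs, with no online/active
 * filtering.
 */
static int unbound_pool_node_sketch(const struct workqueue_attrs *attrs)
{
	int node;

	if (wq_numa_enabled) {
		for_each_node(node) {
			if (cpumask_subset(attrs->cpumask,
					   wq_numa_possible_cpumask[node]))
				return node;
		}
	}

	return NUMA_NO_NODE;
}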
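
And the scheduler-side gate Valentin points to, roughly as it looked before
the migrate_disable()/balance_push rework this thread follows on from. Treat
this as a paraphrase of is_cpu_allowed() (kernel/sched/core.c) rather than
verbatim mainline code:

static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
{
	/* Outside the task's affinity mask: never allowed. */
	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
		return false;

	/* Per-CPU kthreads may keep running on an online but !active CPU. */
	if (is_per_cpu_kthread(p))
		return cpu_online(cpu);

	/* Everything else, unbound kworkers included, needs an active CPU. */
	return cpu_active(cpu);
}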