Message-ID: <X9egDheiQPLdR0IS@hirez.programming.kicks-ass.net>
Date: Mon, 14 Dec 2020 18:25:34 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: linux-kernel@...r.kernel.org,
Lai Jiangshan <laijs@...ux.alibaba.com>,
Tejun Heo <tj@...nel.org>,
Valentin Schneider <valentin.schneider@....com>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: [PATCH 02/10] workqueue: use cpu_possible_mask instead of
cpu_active_mask to break affinity

On Mon, Dec 14, 2020 at 11:54:49PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@...ux.alibaba.com>
>
> Other CPUs might come online later. The workers that lose the binding
> to their CPU should have a chance to run on those later-onlined CPUs.
>
> Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
> Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
> ---
> kernel/workqueue.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index aba71ab359dd..1f5b8385c0cf 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
>
> raw_spin_unlock_irq(&pool->lock);
>
> + /* don't rely on the scheduler to force break affinity for us. */
> for_each_pool_worker(worker, pool)
> - WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
> + WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);

Please explain this one.. it's not making sense. Also the Changelog
doesn't seem remotely related to the actual change.

Afaict this is actively wrong.

Also, can you please not Cc me parts of a series? That's bloody
annoying.
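
For context, the distinction at stake in the quoted hunk is between the
CPUs that are online right now and the CPUs that could ever come online.
Below is a minimal user-space analogue, assuming Linux with glibc; the
program and its names are purely illustrative and not taken from the
patch. It builds an affinity mask over every configured CPU rather than
only the currently online ones, which is roughly what switching from
cpu_active_mask to cpu_possible_mask does for the unbound workers.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/*
	 * Configured CPUs roughly correspond to cpu_possible_mask,
	 * online CPUs to cpu_online_mask/cpu_active_mask.
	 */
	long nr_conf = sysconf(_SC_NPROCESSORS_CONF);
	long nr_onln = sysconf(_SC_NPROCESSORS_ONLN);
	cpu_set_t mask;
	long cpu;

	CPU_ZERO(&mask);
	for (cpu = 0; cpu < nr_conf && cpu < CPU_SETSIZE; cpu++)
		CPU_SET(cpu, &mask);

	/*
	 * Ask for affinity to every configured CPU, not just the ones
	 * online at this moment; the kernel intersects the request with
	 * what is actually runnable.
	 */
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity");

	printf("configured CPUs: %ld, online CPUs: %ld\n", nr_conf, nr_onln);
	return 0;
}

The set_cpus_allowed_ptr() call in the quoted hunk is the in-kernel way
of making a similar request on behalf of the worker tasks.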