Message-ID: <X9oadQJYQH8ss00Z@mtj.duckdns.org>
Date: Wed, 16 Dec 2020 09:32:21 -0500
From: Tejun Heo <tj@...nel.org>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: linux-kernel@...r.kernel.org,
Lai Jiangshan <laijs@...ux.alibaba.com>,
Peter Zijlstra <peterz@...radead.org>,
Valentin Schneider <valentin.schneider@....com>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: [PATCH 02/10] workqueue: use cpu_possible_mask instead of
cpu_active_mask to break affinity
Hello,
On Mon, Dec 14, 2020 at 11:54:49PM +0800, Lai Jiangshan wrote:
> @@ -4909,8 +4909,9 @@ static void unbind_workers(int cpu)
>
> raw_spin_unlock_irq(&pool->lock);
>
> + /* don't rely on the scheduler to force break affinity for us. */
I'm not sure this comment is helpful. It may make sense right now, while the
scheduler behavior is changing, but down the line it's not going to make a
whole lot of sense.
> for_each_pool_worker(worker, pool)
> - WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
> + WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
>
> mutex_unlock(&wq_pool_attach_mutex);
Thanks.
--
tejun