Message-ID: <Zh8EfxdVdiIj_27H@slm.duckdns.org>
Date: Tue, 16 Apr 2024 13:06:39 -1000
From: Tejun Heo <tj@...nel.org>
To: Sven Schnelle <svens@...ux.ibm.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] workqueue: fix selection of wake_cpu in kick_pool()

Hello,

On Mon, Apr 15, 2024 at 07:35:49AM +0200, Sven Schnelle wrote:
> @@ -1277,7 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
>  	    !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) {
>  		struct work_struct *work = list_first_entry(&pool->worklist,
>  						struct work_struct, entry);
> -		p->wake_cpu = cpumask_any_distribute(pool->attrs->__pod_cpumask);
> +		p->wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask,
> +							 cpu_online_mask);

I think this can still race with the last CPU in the pod going down, in which
case cpumask_any_and_distribute() returns nr_cpu_ids. Maybe something like the
following would be better?

	int wake_cpu;

	wake_cpu = cpumask_any_and_distribute(...);
	if (wake_cpu < nr_cpu_ids) {
		p->wake_cpu = wake_cpu;
		// update stat;
	}

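Spelled out against the hunk quoted above, that would look something like the
below. Just a sketch of the idea; I'm assuming the "// update stat" part is
the existing PWQ_STAT_REPATRIATED bump and that nothing else in the block
needs to move:

	if (!pool->attrs->affn_strict &&
	    !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) {
		struct work_struct *work = list_first_entry(&pool->worklist,
						struct work_struct, entry);
		int wake_cpu;

		/*
		 * Pick a CPU from the pod which is still online. If the
		 * whole pod went down under us, this returns nr_cpu_ids
		 * and we leave p->wake_cpu alone.
		 */
		wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask,
						      cpu_online_mask);
		if (wake_cpu < nr_cpu_ids) {
			p->wake_cpu = wake_cpu;
			get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++;
		}
	}
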
This generally seems like a good idea but isn't this still racy? The CPU may
go down between setting p->wake_cpu and wake_up_process().

Thanks.
--
tejun