Message-ID: <yt9dpluogfw9.fsf@linux.ibm.com>
Date: Wed, 17 Apr 2024 17:36:38 +0200
From: Sven Schnelle <svens@...ux.ibm.com>
To: Tejun Heo <tj@...nel.org>
Cc: Lai Jiangshan <jiangshanlai@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] workqueue: fix selection of wake_cpu in kick_pool()
Tejun Heo <tj@...nel.org> writes:
> On Mon, Apr 15, 2024 at 07:35:49AM +0200, Sven Schnelle wrote:
>> @@ -1277,7 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
>>  		    !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) {
>>  			struct work_struct *work = list_first_entry(&pool->worklist,
>>  						struct work_struct, entry);
>> -			p->wake_cpu = cpumask_any_distribute(pool->attrs->__pod_cpumask);
>> +			p->wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask,
>> +								 cpu_online_mask);
>
> I think this can still race with the last CPU in the pod going down and
> return nr_cpu_ids. Maybe something like the following would be better?
>
>	int wake_cpu;
>
>	wake_cpu = cpumask_any_and_distribute(...);
>	if (wake_cpu < nr_cpu_ids) {
>		p->wake_cpu = wake_cpu;
>		// update stat;
>	}
>
> This generally seems like a good idea, but isn't this still racy? The CPU
> may go down between setting p->wake_cpu and wake_up_process().
I don't know without reading the source, but how does this code normally
protect against that race?
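
For reference, folding your suggestion into the hunk above would look
something like this (untested sketch; I left the stat update as a
placeholder since it isn't shown in the quoted context):

	struct work_struct *work = list_first_entry(&pool->worklist,
						    struct work_struct, entry);
	int wake_cpu;

	wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask,
					      cpu_online_mask);
	/*
	 * Every CPU in the pod may have gone offline since the
	 * cpumask_test_cpu() check above, in which case
	 * cpumask_any_and_distribute() returns >= nr_cpu_ids. Only
	 * update wake_cpu if we actually found an online candidate.
	 */
	if (wake_cpu < nr_cpu_ids) {
		p->wake_cpu = wake_cpu;
		/* update stat as before */
	}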
Thanks
Sven