Message-ID: <87a6w71npk.fsf@nanos.tec.linutronix.de>
Date: Tue, 27 Oct 2020 20:19:19 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Petr Mladek <pmladek@...e.com>, qiang.zhang@...driver.com
Cc: tj@...nel.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] kthread_worker: re-set CPU affinities if CPU come online
Petr,
On Tue, Oct 27 2020 at 17:39, Petr Mladek wrote:
> On Mon 2020-10-26 14:52:13, qiang.zhang@...driver.com wrote:
>> From: Zqiang <qiang.zhang@...driver.com>
>>
>> When a CPU goes offline, a 'kthread_worker' bound to that CPU can
>> end up running anywhere. When the CPU comes back online, restore
>> the 'kthread_worker' affinity via a cpuhp notifier.
>
> I am not familiar with CPU hotplug notifiers. I rather add Thomas and
> Peter into Cc.
Thanks!
>> +static int kworker_cpu_online(unsigned int cpu, struct hlist_node *node)
>> +{
>> + struct kthread_worker *worker = hlist_entry(node, struct kthread_worker, cpuhp_node);
>
> The code here looks correct.
>
> JFYI, I was curious why many cpuhp callbacks used hlist_entry_safe().
> But they did not check for NULL. Hence the _safe() variant did
> not really prevent any crash.
>
> It seems that it was cargo-cult programming. cpuhp_invoke_callback()
> uses a plain hlist_for_each(). If I get it correctly, the operations
> are synchronized by cpus_read_lock()/cpus_write_lock() and the _safe
> variant is really not needed.
Correct.
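For illustration, the registration side of such a multi state looks
roughly like this (uncompiled sketch; kworker_online_state is made up
and would hold the dynamic state returned by cpuhp_setup_state_multi()
in the patch):

	static enum cpuhp_state kworker_online_state;

	/* cpuhp_state_add_instance() serializes against hotplug, so
	 * the online/offline callbacks can walk the instance list
	 * with a plain hlist_for_each(). */
	ret = cpuhp_state_add_instance(kworker_online_state,
				       &worker->cpuhp_node);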
>> +static __init int kthread_worker_hotplug_init(void)
>> +{
>> + int ret;
>> +
>> + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "kthread-worker/online",
>> + kworker_cpu_online, NULL);
The dynamic hotplug states run late. What's preventing work from being
queued on such a worker before it is bound to the CPU again?
Nothing at all.
Moving the hotplug state early does not help either because this cannot
happen _before_ the CPUHP_AP_ONLINE state. After that it's already too
late because that's after interrupts have been reenabled on the upcoming
CPU. Depending on the interrupt routing an interrupt hitting the
upcoming CPU might queue work before the state is reached. Work might
also be queued via a timer before rebind happens.
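To illustrate (hypothetical example; my_cpu_worker and my_work are
made up):

	/* A timer which fires on the upcoming CPU after interrupts
	 * are reenabled, but before the dynamic hotplug state has
	 * rebound the worker. */
	static void my_timer_fn(struct timer_list *t)
	{
		/* The worker might still run on an arbitrary CPU at
		 * this point, so per-CPU assumptions in the work
		 * function are violated. */
		kthread_queue_work(my_cpu_worker, &my_work);
	}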
The only current user (powerclamp) has its own hotplug handling and
stops the thread and creates a new one when the CPU comes online. So
that's not a problem.
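That pattern looks roughly like this (simplified sketch, not the
actual powerclamp code; all names are made up):

	static DEFINE_PER_CPU(struct kthread_worker *, my_worker);

	static int my_cpu_online(unsigned int cpu)
	{
		struct kthread_worker *w;

		/* Create a fresh worker bound to the upcoming CPU. */
		w = kthread_create_worker_on_cpu(cpu, 0, "my_worker/%u", cpu);
		if (IS_ERR(w))
			return PTR_ERR(w);
		per_cpu(my_worker, cpu) = w;
		return 0;
	}

	static int my_cpu_offline(unsigned int cpu)
	{
		/* Flushes pending work and stops the thread. */
		kthread_destroy_worker(per_cpu(my_worker, cpu));
		per_cpu(my_worker, cpu) = NULL;
		return 0;
	}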
But in general this _is_ a problem. There is also no mechanism to ensure
that work on a CPU bound worker has been drained before the CPU goes
offline and that work on the outgoing CPU cannot be queued after a
certain point in the hotplug state machine.
CPU bound kernel threads have special properties. You can access per CPU
variables without further protection. This blows up in your face once
the worker thread is unbound after a hotplug operation.
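E.g. a work function may legitimately do (made up example):

	struct my_stats { unsigned long count; };
	static DEFINE_PER_CPU(struct my_stats, my_stats);

	static void my_work_fn(struct kthread_work *work)
	{
		/* Safe only while the worker runs on its bound CPU.
		 * After an offline/online cycle the unbound worker
		 * races with the CPU which owns this data. */
		this_cpu_ptr(&my_stats)->count++;
	}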
So the proposed patch is duct tape and papers over the underlying design
problem.
Either this is fixed in a way which ensures operation on the bound CPU
under all circumstances or it needs to be documented that users have to
have their own hotplug handling similar to what powerclamp does.
Thanks,
tglx