Message-ID: <0fdc06b6-747d-4f54-8a2a-1af9912e382d@redhat.com>
Date: Thu, 27 Jun 2024 08:42:22 -0400
From: Waiman Long <longman@...hat.com>
To: Hillf Danton <hdanton@...a.com>, Nicholas Piggin <npiggin@...il.com>
Cc: "Paul E . McKenney" <paulmck@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] workqueue: Improve scalability of workqueue watchdog touch
On 6/27/24 08:16, Hillf Danton wrote:
> On Tue, Jun 25, 2024 at 09:42:45PM +1000, Nicholas Piggin wrote:
>> On a ~2000 CPU powerpc system, hard lockups have been observed in the
>> workqueue code when stop_machine runs (in this case due to CPU hotplug).
>> This is due to lots of CPUs spinning in multi_cpu_stop, calling
>> touch_nmi_watchdog() which ends up calling wq_watchdog_touch().
>> wq_watchdog_touch() writes to the global variable wq_watchdog_touched,
>> and that can find itself in the same cacheline as other important
>> workqueue data, which slows down operations to the point of lockups.
>>
>> In the case of the following abridged trace, worker_pool_idr was in
>> the hot line, causing the lockups to always appear at idr_find.
>>
> Wonder if the MCS lock does not help in this case.
This patch just tries to avoid polluting the shared cacheline, which leads to
excessive cacheline bouncing. No locking is involved. I am not sure what
you are thinking of using an MCS lock for.
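
For illustration, a minimal sketch of the idea (not the actual patch, whose
body is not quoted here): gate the store to the global wq_watchdog_touched so
the shared cacheline is only dirtied when the stamp is actually stale, instead
of on every touch_nmi_watchdog() call from every spinning CPU. The names
wq_watchdog_thresh and wq_watchdog_touched_cpu, and the thresh/4 staleness
window, are assumptions for this sketch only.

/* Hypothetical sketch, not the posted patch: only write to the
 * shared cacheline when the global timestamp is actually stale.
 */
void wq_watchdog_touch(int cpu)
{
	unsigned long thresh = READ_ONCE(wq_watchdog_thresh) * HZ;
	unsigned long touch_ts = READ_ONCE(wq_watchdog_touched);
	unsigned long now = jiffies;

	if (cpu >= 0)
		per_cpu(wq_watchdog_touched_cpu, cpu) = now;

	/* Skip the store if the last write is still recent enough;
	 * concurrent reads of a clean line scale, stores do not.
	 */
	if (time_after(now, touch_ts + thresh / 4))
		WRITE_ONCE(wq_watchdog_touched, now);
}

With a gate like this, the common case under stop_machine is a read of an
already-shared line rather than thousands of CPUs writing to it, so nothing
bounces the line that worker_pool_idr happens to share.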
Regards,
Longman
>> watchdog: CPU 1125 self-detected hard LOCKUP @ idr_find
>> Call Trace:
>> get_work_pool
>> __queue_work
>> call_timer_fn
>> run_timer_softirq
>> __do_softirq
>> do_softirq_own_stack
>> irq_exit
>> timer_interrupt
>> decrementer_common_virt
>> * interrupt: 900 (timer) at multi_cpu_stop
>> multi_cpu_stop
>> cpu_stopper_thread
>> smpboot_thread_fn
>> kthread
>>