Message-ID: <896a2d84918e4adc8a4d00d72510eb3d@huawei.com>
Date: Thu, 12 Jan 2023 20:51:04 +0000
From: Jonas Oberhauser <jonas.oberhauser@...wei.com>
To: "paulmck@...nel.org" <paulmck@...nel.org>,
"riel@...riel.com" <riel@...riel.com>,
"davej@...emonkey.org.uk" <davej@...emonkey.org.uk>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kernel-team@...a.com" <kernel-team@...a.com>
Subject: RE: [PATCH diagnostic qspinlock] Diagnostics for excessive lock-drop
wait loop time
Hi Paul,
-----Original Message-----
From: Paul E. McKenney [mailto:paulmck@...nel.org]
> We see systems stuck in the queued_spin_lock_slowpath() loop that waits for the lock to become unlocked in the case where the current CPU has set pending state.
Interesting!
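
For anyone following along, the wait loop in question is (as I understand it) the pending-owner spin in queued_spin_lock_slowpath(). A rough sketch, paraphrased from memory of kernel/locking/qspinlock.c rather than copied verbatim:

	/* Try to become the pending waiter; acquire ordering on the fetch. */
	val = queued_fetch_set_pending_acquire(lock);

	/*
	 * If another waiter is already queued or pending, undo our
	 * pending bit (if we set it) and queue ourselves instead.
	 */
	if (unlikely(val & ~_Q_LOCKED_MASK)) {
		if (!(val & _Q_PENDING_MASK))
			clear_pending(lock);
		goto queue;
	}

	/*
	 * We are the pending waiter: spin until the current owner
	 * releases the lock.  This is the loop the report is about.
	 */
	if (val & _Q_LOCKED_MASK)
		smp_cond_load_acquire(&lock->locked, !VAL);

If the CPUs are stuck in that smp_cond_load_acquire(), either the owner never clears the locked byte or the spinning CPU never observes the release, which seems to be exactly what your diagnostic is trying to distinguish.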
Do you know whether the hangs started with a recent patch? Which code paths are involved (virtualization, architecture, ...)? Does it happen extremely rarely? Do you have any additional information?
I saw a similar situation a few years ago in a proprietary kernel, but it only ever happened once, and I gave up looking for the cause after a few days (including some time combing through the compiler-generated assembly).
Have fun,
jonas