Message-ID: <6bc5dbc3-2cdd-5cb8-1632-11de2008a85a@igalia.com>
Date: Thu, 1 Sep 2022 13:24:46 -0300
From: "Guilherme G. Piccoli" <gpiccoli@...lia.com>
To: Greg KH <gregkh@...uxfoundation.org>, evgreen@...omium.org
Cc: arnd@...db.de, linux-efi@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel@...ccoli.net, ardb@...nel.org,
davidgow@...gle.com, jwerner@...omium.org,
Petr Mladek <pmladek@...e.com>
Subject: Re: [PATCH V3] firmware: google: Test spinlock on panic path to avoid
lockups
On 01/09/2022 13:04, Greg KH wrote:
> [...]
>>> What happens if the lock is grabbed right after testing for it?
>>> Shouldn't you use lockdep_assert_held() instead as the documentation
>>> says to?
>>
>> How, if in this point only a single CPU (this one, executing the code)
>> is running?
>
> How are we supposed to know this here?
>
Reading the code?
Or do you mean this should be mentioned in the commit description?
I can do that if you prefer.
>> other CPUs, except this one executing the code. So, either the lock was
>> taken (and we bail), or it wasn't and it's safe to continue.
>
> Then who else could have taken the lock? And if all other CPUs are
> stopped, who cares about the lock at all? Just don't grab it (you
> should check for that when you want to grab it) and then you can work
> properly at that point in time.
>
I don't think it is so simple - we are in the panic path.
Imagine the lock was taken on CPU0, where GSMI is performing some
operation. During that operation, CPU1 panics!
When that happens, panic() executes on CPU1 and disables CPU0 through
"strong" mechanisms (NMI). So CPU0 held the lock, it is now off, and
when CPU1 goes through the panic notifiers it will eventually wait
forever for this lock in the GSMI handler, unless we have this patch,
which prevents the handler from running in that case.
Does that make sense?