Date:   Mon, 14 Jan 2019 16:07:34 -0500
From:   Waiman Long <longman@...hat.com>
To:     James Morse <james.morse@....com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     Zhenzhong Duan <zhenzhong.duan@...cle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        SRINIVAS <srinivas.eeda@...cle.com>
Subject: Re: Question about qspinlock nest

On 01/14/2019 08:54 AM, James Morse wrote:
> Hi Peter,
>
> On 14/01/2019 13:16, Peter Zijlstra wrote:
>> On Fri, Jan 11, 2019 at 06:32:58PM +0000, James Morse wrote:
>>> On 10/01/2019 20:12, Peter Zijlstra wrote:
>>>> On Thu, Jan 10, 2019 at 06:25:57PM +0000, James Morse wrote:
>>>> The thing is, everything non-maskable (NMI-like) really should not be
>>>> using spinlocks at all.
>>>>
>>>> I otherwise have no clue about wth APEI is, but it sounds like horrible
>>>> crap ;-)
> I think you've called it that before! It's that GHES thing in drivers/acpi/apei.
>>>
>>> What is the alternative? bit_spin_lock()?
>>> These things can happen independently on multiple CPUs. On arm64 these NMI-like
>>> things don't affect all CPUs like they seem to on x86.
>> It has nothing to do with how many CPUs are affected. It has everything
>> to do with not being maskable.
> (sorry, I didn't include any of the context, let me back-up a bit here:)
>
>> What avoids the trivial self-recursion:
>>
>>   spin_lock(&x)
>>   <NMI>
>>     spin_lock(&x)
>>      ... wait forever more ...
>>   </NMI>
>>   spin_unlock(&x)
>>
>> ?
> If it's trying to take the same lock, I agree it's deadlocked.
> If the sequence above started with <NMI>, I agree it's deadlocked.
>
> APEI/GHES is doing neither of these things. It takes a lock that is only ever
> taken in_nmi(). nmi_enter()'s BUG_ON(in_nmi()) means these never become re-entrant.
>
> What is the lock doing? Protecting the 'NMI' fixmap slot in the unlikely case
> that two CPUs end up in here at the same time (roughly the pattern sketched below).
>
> (I thought x86's NMI masked NMI until the next iret?)
>
>
> This is murkier on arm64 as we have multiple things that behave like this, but
> there is an order to them, and none of them can interrupt themselves.
> e.g. We can't take an SError during the SError handler.
>
> But we can take this SError/NMI on another CPU while the first one is still
> running the handler.
>
> These multiple NMI-like notifications mean having multiple locks/fixmap-slots,
> one per notification. This is where the qspinlock node limit comes in, as we
> could have more than 4 contexts.
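
For anyone following along, the pattern being described looks roughly
like this (illustrative names, not the actual drivers/acpi/apei/ghes.c
code): one fixmap slot and one raw spinlock per NMI-like notification
type, so each lock is only ever taken from the one context that owns
it and can never self-recurse, but can still be contended across CPUs:

	static DEFINE_RAW_SPINLOCK(ghes_nmi_fixmap_lock);

	/* Map a PFN for error-record access from the NMI-like handler. */
	static void __iomem *ghes_map_pfn_nmi(u64 pfn)
	{
		phys_addr_t paddr = pfn << PAGE_SHIFT;

		raw_spin_lock(&ghes_nmi_fixmap_lock);	/* taken in_nmi() only */
		__set_fixmap(FIX_APEI_GHES_NMI, paddr,
			     arch_apei_get_mem_attribute(paddr));
		return (void __iomem *)fix_to_virt(FIX_APEI_GHES_NMI);
	}

	static void ghes_unmap_nmi(void)
	{
		clear_fixmap(FIX_APEI_GHES_NMI);
		raw_spin_unlock(&ghes_nmi_fixmap_lock);
	}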

Still, it will be extremely unlikely to have more than 4 nested spinlock
acquisitions with contention. Do you think it will be helpful to make
the MAX_NODES parameter configurable to either 4 or 8? For x86, I think
we can live with 4. On arm64, we can opt for 8 if you think there is a
decent chance that more than 4 could be needed under certain
circumstances. This will, of course, reduce the max NR_CPUS by half.
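
To spell out why: the MCS node index is encoded in the tail of the
32-bit lock word, so widening it steals bits from the CPU field.
Roughly, simplified from kernel/locking/qspinlock.c and
asm-generic/qspinlock_types.h:

	/* One MCS node per CPU per context: task, softirq, hardirq, NMI. */
	#define MAX_NODES	4

	static DEFINE_PER_CPU_ALIGNED(struct qnode, qnodes[MAX_NODES]);

	/*
	 * 32-bit lock word layout (the NR_CPUS < 16K case):
	 *
	 *	 0- 7: locked byte
	 *	 8-15: pending
	 *	16-17: tail index	2 bits  -> MAX_NODES = 4
	 *	18-31: tail CPU + 1	14 bits -> up to ~16K CPUs
	 *
	 * A 3-bit tail index (MAX_NODES = 8) has to take one bit from
	 * the tail CPU field, halving the maximum NR_CPUS the word
	 * can encode.
	 */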

Cheers,
Longman
