Message-Id: <1635324001.1tf9yz448t.astroid@bobo.none>
Date:   Wed, 27 Oct 2021 18:51:16 +1000
From:   Nicholas Piggin <npiggin@...il.com>
To:     benh@...nel.crashing.org, Laurent Dufour <ldufour@...ux.ibm.com>,
        mpe@...erman.id.au, paulus@...ba.org
Cc:     linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH 1/2] powerpc/watchdog: prevent printk and send IPI while
 holding the wd lock

Excerpts from Laurent Dufour's message of October 27, 2021 6:14 pm:
> On 27/10/2021 at 05:29, Nicholas Piggin wrote:
>> Excerpts from Laurent Dufour's message of October 27, 2021 2:27 am:
>>> When handling the Watchdog interrupt, long processing should not be done
>>> while holding the __wd_smp_lock. This prevents the other CPUs from grabbing
>>> it and from processing Watchdog timer interrupts. Furthermore, this could
>>> lead to the following situation:
>>>
>>> CPU x detects a lockup on CPU y and grabs the __wd_smp_lock
>>>        in watchdog_smp_panic()
>>> CPU y catches the watchdog interrupt and tries to grab the __wd_smp_lock
>>>        in soft_nmi_interrupt()
>>> CPU x waits for CPU y to catch the IPI for 1s in __smp_send_nmi_ipi()
>> 
>> CPU y should get the IPI here if it's an NMI IPI (which will be true for
>> >= POWER9 64s).
>> 
>> That said, not all platforms support it and the console lock problem
>> seems real, so okay.
>> 
>>> CPU x will time out and so will have spent 1s waiting while holding the
>>>        __wd_smp_lock.
>>>
>>> A deadlock may also happen between the __wd_smp_lock and the console_owner
>>> 'lock' this way:
>>> CPU x grabs the console_owner 'lock'
>>> CPU y grabs the __wd_smp_lock
>>> CPU x catches the watchdog timer interrupt and needs to grab the __wd_smp_lock
>>> CPU y wants to print something and waits for the console_owner 'lock'
>>> -> deadlock
>>>
>>> Doing all the long processing without holding the __wd_smp_lock prevents
>>> these situations.
>> 
>> The intention was to avoid logs getting garbled e.g., if multiple
>> different CPUs fire at once.
>> 
>> I wonder if instead we could deal with that by protecting the IPI
>> sending and printing stuff with a trylock, and if you don't get the
>> trylock then just return, and you'll come back with the next timer
>> interrupt.
> 
> That sounds a bit risky to me: on a large system, when the system goes 
> wrong, all the CPUs may try the lock here.

That should be okay though: one will get through and the others will 
skip.
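
Roughly what I have in mind, as a completely untested sketch (the
wd_smp_trylock() helper does not exist today, it's just a placeholder
for whatever we would add):

static void watchdog_smp_panic(int cpu, u64 tb)
{
	unsigned long flags;

	/*
	 * If another CPU already holds the lock, it is handling the
	 * lockup and doing the printing; just return, and we come back
	 * via the next timer interrupt.
	 */
	if (!wd_smp_trylock(&flags))	/* hypothetical helper */
		return;

	/* ... detect the lockup, print, send the NMI IPIs as before ... */

	wd_smp_unlock(&flags);
}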

> Furthermore, the operations now done under the lock's protection are quite 
> fast; there is no more spinning like the delay loop done when sending an IPI.
> 
> Protecting the IPI sending is a nightmare: the target CPU may later play with 
> the lock we are taking during the IPI processing, and furthermore, if the 
> target CPU is not responding, the sending CPU waits for 1s, which slows the 
> whole system down due to the lock being held.
> Since I make a copy of the pending CPU mask and clear it under the lock's 
> protection, the IPI sending is safe even though it is done without holding 
> the lock.

Protecting IPI sending basically has all the same issues in the NMI
IPI layer.
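
For reference, the copy-then-clear pattern you describe is roughly the
following (a sketch only; the names mirror the watchdog code but are
illustrative, not the exact patch):

static void wd_send_nmi_ipis(void)
{
	/* static to avoid a large stack frame; snapshotted under the lock */
	static cpumask_t save_mask;
	unsigned long flags;
	int c;

	/* Snapshot and clear the pending mask while holding the lock... */
	wd_smp_lock(&flags);
	cpumask_copy(&save_mask, &wd_smp_cpus_pending);
	cpumask_clear(&wd_smp_cpus_pending);
	wd_smp_unlock(&flags);

	/*
	 * ...then do the slow part unlocked: each IPI may spin for up to
	 * 1s waiting for an unresponsive CPU, and nobody else is blocked
	 * on __wd_smp_lock meanwhile.
	 */
	for_each_cpu(c, &save_mask)
		smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000);
}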

> 
> Regarding the interleaved traces, I don't think this has to be managed down 
> here, but rather in the printk/console path.

It can't necessarily be handled there, because part of the problem is 
actually that an NMI handler can be interrupted by another NMI IPI: the 
caller can return as soon as handlers start running, rather than only 
after they complete.

I don't think it would be an additional nightmare to trylock.

Thanks,
Nick
