Message-Id: <D3R7YDW8U4QJ.1ZC4SPQN5SY1G@gmail.com>
Date: Wed, 28 Aug 2024 13:19:46 +1000
From: "Nicholas Piggin" <npiggin@...il.com>
To: "Nysal Jan K.A." <nysal@...ux.ibm.com>, "Michael Ellerman"
 <mpe@...erman.id.au>
Cc: <stable@...r.kernel.org>, "Geetika Moolchandani"
 <geetika@...ux.ibm.com>, "Vaishnavi Bhat" <vaish123@...ibm.com>, "Jijo
 Varghese" <vargjijo@...ibm.com>, "Christophe Leroy"
 <christophe.leroy@...roup.eu>, "Naveen N Rao" <naveen@...nel.org>,
 <linuxppc-dev@...ts.ozlabs.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] powerpc/qspinlock: Fix deadlock in MCS queue

Hey Nysal,

This is really good debugging, and a nice write-up.

On Mon Aug 26, 2024 at 6:12 PM AEST, Nysal Jan K.A. wrote:
> If an interrupt occurs in queued_spin_lock_slowpath() after we increment
> qnodesp->count and before node->lock is initialized, another CPU might
> see stale lock values in get_tail_qnode(). If the stale lock value happens
> to match the lock on that CPU, then we write to the "next" pointer of
> the wrong qnode. This causes a deadlock as the former CPU, once it becomes
> the head of the MCS queue, will spin indefinitely until its "next" pointer
> is set by its successor in the queue. This results in lockups similar to
> the following.
>
>    watchdog: CPU 15 Hard LOCKUP
>    ......
>    NIP [c0000000000b78f4] queued_spin_lock_slowpath+0x1184/0x1490
>    LR [c000000001037c5c] _raw_spin_lock+0x6c/0x90
>    Call Trace:
>     0xc000002cfffa3bf0 (unreliable)
>     _raw_spin_lock+0x6c/0x90
>     raw_spin_rq_lock_nested.part.135+0x4c/0xd0
>     sched_ttwu_pending+0x60/0x1f0
>     __flush_smp_call_function_queue+0x1dc/0x670
>     smp_ipi_demux_relaxed+0xa4/0x100
>     xive_muxed_ipi_action+0x20/0x40
>     __handle_irq_event_percpu+0x80/0x240
>     handle_irq_event_percpu+0x2c/0x80
>     handle_percpu_irq+0x84/0xd0
>     generic_handle_irq+0x54/0x80
>     __do_irq+0xac/0x210
>     __do_IRQ+0x74/0xd0
>     0x0
>     do_IRQ+0x8c/0x170
>     hardware_interrupt_common_virt+0x29c/0x2a0
>    --- interrupt: 500 at queued_spin_lock_slowpath+0x4b8/0x1490
>    ......
>    NIP [c0000000000b6c28] queued_spin_lock_slowpath+0x4b8/0x1490
>    LR [c000000001037c5c] _raw_spin_lock+0x6c/0x90
>    --- interrupt: 500
>     0xc0000029c1a41d00 (unreliable)
>     _raw_spin_lock+0x6c/0x90
>     futex_wake+0x100/0x260
>     do_futex+0x21c/0x2a0
>     sys_futex+0x98/0x270
>     system_call_exception+0x14c/0x2f0
>     system_call_vectored_common+0x15c/0x2ec
>
> The following code flow illustrates how the deadlock occurs:
>
>         CPU0                                   CPU1
>         ----                                   ----
>   spin_lock_irqsave(A)                          |
>   spin_unlock_irqrestore(A)                     |
>     spin_lock(B)                                |
>          |                                      |
>          ▼                                      |
>    id = qnodesp->count++;                       |
>   (Note that nodes[0].lock == A)                |
>          |                                      |
>          ▼                                      |
>       Interrupt                                 |
>   (happens before "nodes[0].lock = B")          |
>          |                                      |
>          ▼                                      |
>   spin_lock_irqsave(A)                          |
>          |                                      |
>          ▼                                      |
>    id = qnodesp->count++                        |
>    nodes[1].lock = A                            |
>          |                                      |
>          ▼                                      |
>   Tail of MCS queue                             |
>          |                             spin_lock_irqsave(A)
>          ▼                                      |
>   Head of MCS queue                             ▼
>          |                             CPU0 is previous tail
>          ▼                                      |
>    Spin indefinitely                            ▼
>   (until "nodes[1].next != NULL")      prev = get_tail_qnode(A, CPU0)
>                                                 |
>                                                 ▼
>                                        prev == &qnodes[CPU0].nodes[0]
>                                      (as qnodes[CPU0].nodes[0].lock == A)
>                                                 |
>                                                 ▼
>                                        WRITE_ONCE(prev->next, node)
>                                                 |
>                                                 ▼
>                                         Spin indefinitely
>                                      (until nodes[0].locked == 1)

I can follow your scenario, and agree it is a bug.

Generic qspinlock code does not have a similar path because it encodes
the node idx along with the CPU in the spinlock word. The powerpc
qspinlock implementation removed that to save some bits in the lock
word (to support more CPUs).
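
For reference, the generic code (kernel/locking/qspinlock.c) packs the
tail roughly as below; this is paraphrased from memory, so treat the
exact macro names as illustrative rather than authoritative:

	/*
	 * Both the CPU number and the per-CPU node index end up in the
	 * lock word itself, so a stale per-node value can never be
	 * mistaken for the current tail.
	 */
	static inline u32 encode_tail(int cpu, int idx)
	{
		u32 tail;

		tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET; /* +1 so 0 means "no tail" */
		tail |= idx << _Q_TAIL_IDX_OFFSET;	 /* assumes idx < 4 */

		return tail;
	}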

What probably makes it really difficult to hit is that I think both
locks A and B need contention from other sources to push them into the
queueing slow path. I guess that's omitted for brevity in the flow
above, which is fine.

> Thanks to Saket Kumar Bhaskar for help with recreating the issue
>
> Fixes: 84990b169557 ("powerpc/qspinlock: add mcs queueing for contended waiters")
> Cc: stable@...r.kernel.org # v6.2+
> Reported-by: Geetika Moolchandani <geetika@...ux.ibm.com>
> Reported-by: Vaishnavi Bhat <vaish123@...ibm.com>
> Reported-by: Jijo Varghese <vargjijo@...ibm.com>
> Signed-off-by: Nysal Jan K.A. <nysal@...ux.ibm.com>
> ---
>  arch/powerpc/lib/qspinlock.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
> index 5de4dd549f6e..59861c665cef 100644
> --- a/arch/powerpc/lib/qspinlock.c
> +++ b/arch/powerpc/lib/qspinlock.c
> @@ -697,6 +697,12 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
>  	}
>  
>  release:
> +	/*
> +	 * Clear the lock, as another CPU might see stale values if an
> +	 * interrupt occurs after we increment qnodesp->count but before
> +	 * node->lock is initialized
> +	 */
> +	node->lock = NULL;
>  	qnodesp->count--; /* release the node */

AFAIKS this fix works.

There is one complication: those two stores could be swapped by the
compiler. So we could take an IRQ here that sees the node has been
freed while node->lock has not yet been cleared. That is basically
equivalent to the problem solved by the barrier() on the count++ side.
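
From memory, the count++ side does roughly the following (paraphrased,
so the exact comment and field order may differ):

	idx = qnodesp->count++;
	/*
	 * Keep the compiler from reordering the increment after the node
	 * initialisation below; an IRQ taken in between could otherwise
	 * pick the same node and overwrite these assignments.
	 */
	barrier();
	node = &qnodesp->nodes[idx];
	node->next = NULL;
	node->lock = lock;	/* the store your scenario races with */
	node->cpu = smp_processor_id();
	node->locked = 0;	/* other fields omitted */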

This reordering would not cause a problem in your scenario AFAIKS,
because by the time the lock call returns, node->lock *will* be
cleared, so it cannot cause a problem later.
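
Concretely, I mean something like this on top of your hunk (untested
sketch, just to show the intended ordering):

release:
	/*
	 * Clear the lock association before the node is marked free, and
	 * keep the compiler from reordering the two stores.
	 */
	node->lock = NULL;
	barrier();
	qnodesp->count--;	/* release the node */

That way the node can never be observed as free while still carrying a
stale lock pointer.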

Still, should we put a barrier() between these, as sketched above, just
to make things a bit cleaner? I.e., once count is decremented, we
definitely won't do any other stores to node. Otherwise,

Reviewed-by: Nicholas Piggin <npiggin@...il.com>

Thanks,
Nick
