Message-Id: <20181001071641.19282-3-jgross@suse.com>
Date: Mon, 1 Oct 2018 09:16:41 +0200
From: Juergen Gross <jgross@...e.com>
To: linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
x86@...nel.org
Cc: boris.ostrovsky@...cle.com, hpa@...or.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, Juergen Gross <jgross@...e.com>,
stable@...r.kernel.org, Waiman.Long@...com, peterz@...radead.org
Subject: [PATCH 2/2] xen: make xen_qlock_wait() nestable
xen_qlock_wait() isn't safe for nested calls due to interrupts. A call
of xen_qlock_kick() might be ignored if a deeper nesting level was
active right before the call of xen_poll_irq():

CPU 1:                                 CPU 2:
spin_lock(lock1)
                                       spin_lock(lock1)
                                       -> xen_qlock_wait()
                                          -> xen_clear_irq_pending()
                                          Interrupt happens
spin_unlock(lock1)
-> xen_qlock_kick(CPU 2)
                                       spin_lock_irqsave(lock2)
spin_lock_irqsave(lock2)
                                       -> xen_qlock_wait()
                                          -> xen_clear_irq_pending()
                                             clears kick for lock1
                                          -> xen_poll_irq()
spin_unlock_irqrestore(lock2)
-> xen_qlock_kick(CPU 2)
                                          wakes up
                                       spin_unlock_irqrestore(lock2)
                                       IRET
                                         resumes in xen_qlock_wait()
                                         -> xen_poll_irq()
                                         never wakes up
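
For illustration only (not part of the patch): a tiny stand-alone C
model of the sequence above, where a single kick_pending flag stands in
for the per-CPU lock_kicker_irq event that all nesting levels share:

/*
 * Illustration only, not kernel code: the single kick_pending flag
 * models the per-CPU lock_kicker_irq event that is shared by all
 * nesting levels of xen_qlock_wait().
 */
#include <stdbool.h>
#include <stdio.h>

static bool kick_pending;			/* per-CPU wakeup event */

static void qlock_kick(void)        { kick_pending = true; }
static void clear_irq_pending(void) { kick_pending = false; }
static bool irq_pending(void)       { return kick_pending; }

int main(void)
{
	/* CPU 2, outer level, waiting for lock1 */
	clear_irq_pending();		/* xen_clear_irq_pending() */

	/* an interrupt hits CPU 2 before it reaches xen_poll_irq() */

	/* CPU 1: spin_unlock(lock1) -> xen_qlock_kick(CPU 2) */
	qlock_kick();

	/* CPU 2, nested level (in the irq handler), waiting for lock2 */
	clear_irq_pending();		/* discards the kick sent for lock1 */

	/* CPU 1: spin_unlock_irqrestore(lock2) -> xen_qlock_kick(CPU 2) */
	qlock_kick();
	if (irq_pending())		/* nested xen_poll_irq() wakes up */
		clear_irq_pending();	/* nested level consumes its kick */

	/* IRET: the outer level resumes and polls for a kick that is gone */
	if (!irq_pending())
		printf("outer xen_qlock_wait(): never wakes up\n");

	return 0;
}

The kick sent for lock1 has been consumed by the nested level, so the
outer xen_poll_irq() has nothing left to wake it up.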
The solution is to disable interrupts in xen_qlock_wait() and not to
poll for the irq at all in case xen_qlock_wait() is called in NMI
context: disabling interrupts cannot guard against NMIs, so in that
case we just keep spinning.
Cc: stable@...r.kernel.org
Cc: Waiman.Long@...com
Cc: peterz@...radead.org
Signed-off-by: Juergen Gross <jgross@...e.com>
---
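For review convenience, xen_qlock_wait() as it reads with this patch
applied (this just restates the hunk below, no additional changes):

static void xen_qlock_wait(u8 *byte, u8 val)
{
	unsigned long flags;
	int irq = __this_cpu_read(lock_kicker_irq);

	/* If kicker interrupts not initialized yet, just spin */
	if (irq == -1 || in_nmi())
		return;

	/* Guard against reentry. */
	local_irq_save(flags);

	/* If irq pending already clear it. */
	if (xen_test_irq_pending(irq)) {
		xen_clear_irq_pending(irq);
	} else if (READ_ONCE(*byte) == val) {
		/* Block until irq becomes pending (or a spurious wakeup) */
		xen_poll_irq(irq);
	}

	local_irq_restore(flags);
}
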
arch/x86/xen/spinlock.c | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index cd210a4ba7b1..e8d880e98057 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -39,29 +39,25 @@ static void xen_qlock_kick(int cpu)
  */
 static void xen_qlock_wait(u8 *byte, u8 val)
 {
+	unsigned long flags;
 	int irq = __this_cpu_read(lock_kicker_irq);
 
 	/* If kicker interrupts not initialized yet, just spin */
-	if (irq == -1)
+	if (irq == -1 || in_nmi())
 		return;
 
-	/* If irq pending already clear it and return. */
+	/* Guard against reentry. */
+	local_irq_save(flags);
+
+	/* If irq pending already clear it. */
 	if (xen_test_irq_pending(irq)) {
 		xen_clear_irq_pending(irq);
-		return;
+	} else if (READ_ONCE(*byte) == val) {
+		/* Block until irq becomes pending (or a spurious wakeup) */
+		xen_poll_irq(irq);
 	}
 
-	if (READ_ONCE(*byte) != val)
-		return;
-
-	/*
-	 * If an interrupt happens here, it will leave the wakeup irq
-	 * pending, which will cause xen_poll_irq() to return
-	 * immediately.
-	 */
-
-	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
-	xen_poll_irq(irq);
+	local_irq_restore(flags);
 }
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
--
2.16.4