Open Source and information security mailing list archives
Date: Thu, 30 Sep 2021 00:10:02 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E . McKenney" <paulmck@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
	Frederic Weisbecker <frederic@...nel.org>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Uladzislau Rezki <urezki@...il.com>,
	Valentin Schneider <valentin.schneider@....com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Boqun Feng <boqun.feng@...il.com>,
	Neeraj Upadhyay <neeraju@...eaurora.org>,
	Josh Triplett <josh@...htriplett.org>,
	Joel Fernandes <joel@...lfernandes.org>,
	rcu@...r.kernel.org
Subject: [PATCH 01/11] rcu/nocb: Make local rcu_nocb_lock_irqsave() safe
 against concurrent deoffloading

rcu_nocb_lock_irqsave() can be preempted between the call to
rcu_segcblist_is_offloaded() and the actual locking. This matters now
that rcu_core() is preemptible on PREEMPT_RT and the (de-)offloading
process can interrupt the softirq or the rcuc kthread. As a result we
may locklessly call into code that requires nocb locking. In practice
this is a problem while we accelerate callbacks on rcu_core().

Simply disabling interrupts before (instead of after) checking the NOCB
offload state fixes the issue.
Reported-by: Valentin Schneider <valentin.schneider@....com>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Valentin Schneider <valentin.schneider@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Josh Triplett <josh@...htriplett.org>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Neeraj Upadhyay <neeraju@...eaurora.org>
Cc: Uladzislau Rezki <urezki@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
 kernel/rcu/tree.h | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 70188cb42473..deeaf2fee714 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -439,12 +439,16 @@ static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
 static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp);
 #ifdef CONFIG_RCU_NOCB_CPU
 static void __init rcu_organize_nocb_kthreads(void);
-#define rcu_nocb_lock_irqsave(rdp, flags)			\
-do {								\
-	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist))	\
-		local_irq_save(flags);				\
-	else							\
-		raw_spin_lock_irqsave(&(rdp)->nocb_lock, (flags)); \
+
+/*
+ * Disable IRQs before checking offloaded state so that local
+ * locking is safe against concurrent de-offloading.
+ */
+#define rcu_nocb_lock_irqsave(rdp, flags)			\
+do {								\
+	local_irq_save(flags);					\
+	if (rcu_segcblist_is_offloaded(&(rdp)->cblist))		\
+		raw_spin_lock(&(rdp)->nocb_lock);		\
 } while (0)
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 #define rcu_nocb_lock_irqsave(rdp, flags) local_irq_save(flags)
-- 
2.25.1