Date: Wed, 20 Jan 2021 13:42:53 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: "Uladzislau Rezki (Sony)" <urezki@...il.com>,
	LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
	Michael Ellerman <mpe@...erman.id.au>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Daniel Axtens <dja@...ens.net>,
	Frederic Weisbecker <frederic@...nel.org>,
	Neeraj Upadhyay <neeraju@...eaurora.org>,
	Joel Fernandes <joel@...lfernandes.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Michal Hocko <mhocko@...e.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Theodore Y . Ts'o" <tytso@....edu>,
	Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH 3/3] kvfree_rcu: use migrate_disable/enable()

On Wed, Jan 20, 2021 at 08:45:54PM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-01-20 17:21:48 [+0100], Uladzislau Rezki (Sony) wrote:
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3489,10 +3489,12 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
> >  	    (*krcp)->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
> >  		bnode = get_cached_bnode(*krcp);
> >  		if (!bnode && can_alloc) {
> > +			migrate_disable();
> >  			krc_this_cpu_unlock(*krcp, *flags);
> 
> Why is krc_this_cpu_unlock() defined as
> | static inline void
> | krc_this_cpu_unlock(struct kfree_rcu_cpu *krcp, unsigned long flags)
> | {
> | 	raw_spin_unlock(&krcp->lock);
> | 	local_irq_restore(flags);
> | }
> 
> instead of raw_spin_unlock_irqrestore()?
> Should something within the locked section trigger a scheduling event by
> setting TIF_NEED_RESCHED, then there will be no scheduling event on
> unlock. It will be delayed until a later "random" preempt_enable().
> 
> raw_spin_unlock_irqrestore() will reschedule if the flag is set,
> local_irq_restore() will not.

Good catch, thank you!  This one is already in mainline, so I queued
the following patch.

							Thanx, Paul

------------------------------------------------------------------------

commit 6c1d51e012c5b474cda77d4fa644d76e041c1e05
Author: Paul E. McKenney <paulmck@...nel.org>
Date:   Wed Jan 20 13:38:08 2021 -0800

    kvfree_rcu: Make krc_this_cpu_unlock() use raw_spin_unlock_irqrestore()
    
    The krc_this_cpu_unlock() function does a raw_spin_unlock() immediately
    followed by a local_irq_restore().  This commit saves a line of code by
    merging them into a raw_spin_unlock_irqrestore().  This transformation
    also reduces scheduling latency because raw_spin_unlock_irqrestore()
    responds immediately to a reschedule request.  In contrast,
    local_irq_restore() does a scheduling-oblivious enabling of interrupts.
    
    Reported-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
    Signed-off-by: Paul E. McKenney <paulmck@...nel.org>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cad3607..e7a226a 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3201,8 +3201,7 @@ krc_this_cpu_lock(unsigned long *flags)
 static inline void
 krc_this_cpu_unlock(struct kfree_rcu_cpu *krcp, unsigned long flags)
 {
-	raw_spin_unlock(&krcp->lock);
-	local_irq_restore(flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 static inline struct kvfree_rcu_bulk_data *
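
For readers less familiar with the distinction being discussed, the sketch
below contrasts the two unlock sequences outside the kvfree_rcu() context.
It is an illustrative example, not code from the thread: example_lock and
the two helper names are made up, and only the standard kernel
<linux/spinlock.h> and <linux/irqflags.h> primitives quoted above are
assumed.

#include <linux/spinlock.h>
#include <linux/irqflags.h>

static DEFINE_RAW_SPINLOCK(example_lock);

/* Split form, as in the original krc_this_cpu_unlock(). */
static inline void unlock_then_restore(unsigned long flags)
{
	/*
	 * raw_spin_unlock() runs its preempt_enable() while interrupts
	 * are still disabled, so it cannot reschedule here.
	 */
	raw_spin_unlock(&example_lock);

	/*
	 * local_irq_restore() only restores the interrupt state.  If
	 * TIF_NEED_RESCHED was set inside the critical section, the
	 * reschedule is deferred to some later preempt_enable().
	 */
	local_irq_restore(flags);
}

/* Combined form, as in the queued patch. */
static inline void unlock_and_restore(unsigned long flags)
{
	/*
	 * raw_spin_unlock_irqrestore() restores interrupts and then
	 * honors a pending reschedule request immediately.
	 */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}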