Date:   Tue, 18 Feb 2020 19:59:20 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Lai Jiangshan <laijs@...ux.alibaba.com>,
        linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Josh Triplett <josh@...htriplett.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Lai Jiangshan <jiangshanlai@...il.com>, rcu@...r.kernel.org
Subject: Re: [PATCH V2 1/7] rcu: use preempt_count to test whether scheduler
 locks are held

On Tue, Feb 18, 2020 at 10:31:47PM -0500, Joel Fernandes wrote:
> On Sat, Nov 02, 2019 at 12:45:53PM +0000, Lai Jiangshan wrote:
> > Ever since preemption was introduced to the Linux kernel,
> > irq-disabled spinlocks have always been held with preemption
> > disabled. One reason is that we sometimes need to use
> > spin_unlock(), which does preempt_enable(), to unlock an
> > irq-disabled spinlock while keeping irqs disabled. So
> > preempt_count can be used to test whether scheduler locks
> > might be held.
> > 
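
To make the invariant explicit (an illustrative sketch only, not part
of the patch; the helper name is made up), on a CONFIG_PREEMPT kernel
this is roughly the test being relied upon:

	#include <linux/preempt.h>

	/* spin_lock() and friends imply preempt_disable(), and
	 * local_bh_disable() bumps the softirq count, so a nonzero
	 * preempt/BH field in preempt_count() means some lock --
	 * possibly a scheduler lock -- may currently be held.
	 */
	static bool scheduler_locks_possibly_held(void)
	{
		return !!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK));
	}
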
> > CC: Peter Zijlstra <peterz@...radead.org>
> > Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
> > ---
> >  kernel/rcu/tree_plugin.h | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 0982e9886103..aba5896d67e3 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -603,10 +603,14 @@ static void rcu_read_unlock_special(struct task_struct *t)
> >  		      tick_nohz_full_cpu(rdp->cpu);
> >  		// Need to defer quiescent state until everything is enabled.
> >  		if (irqs_were_disabled && use_softirq &&
> > -		    (in_interrupt() ||
> > -		     (exp && !t->rcu_read_unlock_special.b.deferred_qs))) {
> > +		    (in_interrupt() || (exp && !preempt_bh_were_disabled))) {
> >  			// Using softirq, safe to awaken, and we get
> >  			// no help from enabling irqs, unlike bh/preempt.
> > +			// in_interrupt(): raise_softirq_irqoff() is
> > +			// guaranteed not to do a wakeup.
> > +			// !preempt_bh_were_disabled: scheduler locks cannot
> > +			// be held, since spinlocks are always held with
> > +			// preempt_disable(), so the wakeup will be safe.
> 
> This means that if preemption is disabled for any reason other than
> scheduler locks, such as acquiring an unrelated lock that is never
> taken by the scheduler, then the softirq would not be raised even
> when it was safe to do so. In that respect, it seems a step back, no?
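
Concretely, the case in question looks something like this (a
hypothetical sketch, not from the patch; the lock name is made up):

	spin_lock_irqsave(&my_driver_lock, flags); /* irqs off, and     */
						   /* preempt_disable() */
	rcu_read_lock();
	/* ... an expedited grace period marks this reader special ... */
	rcu_read_unlock();	/* exp is true, but preempt_bh_were_    */
				/* disabled is also true, so the        */
				/* softirq is not raised even though    */
				/* no scheduler lock is held and the    */
				/* wakeup would have been safe.         */
	spin_unlock_irqrestore(&my_driver_lock, flags);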

This patch was one of the things motivating me to turn tick on for
nohz_full CPUs that spend too long in the kernel.  Given that change,
this patch can be (and recently was) made more straightforward.  Prior to
the nohz_full change, Lai was kind of between a rock and a hard place
on this one.  ;-)

							Thanx, Paul

> thanks,
> 
>  - Joel
> 
> 
> >  			raise_softirq_irqoff(RCU_SOFTIRQ);
> >  		} else {
> >  			// Enabling BH or preempt does reschedule, so...
> > -- 
> > 2.20.1
> > 
