Message-Id: <20180522002823.GP3803@linux.vnet.ibm.com>
Date:   Mon, 21 May 2018 17:28:23 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Joel Fernandes <joelaf@...gle.com>, linux-kernel@...r.kernel.org,
        Josh Triplett <josh@...htriplett.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Lai Jiangshan <jiangshanlai@...il.com>, byungchul.park@....com,
        kernel-team@...roid.com
Subject: Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing
 its gp_seq_needed

On Mon, May 21, 2018 at 05:07:34PM -0700, Joel Fernandes wrote:
> On Mon, May 21, 2018 at 04:25:38PM -0700, Paul E. McKenney wrote:
> > On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> > > We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> > > after the access.
> > 
> > Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
> > the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
> > so nothing else is accessing it.  Or at least that is the intent.  ;-)
> 
> I was talking about protecting the internal node's rnp->gp_seq_needed, not
> the rnp_start's gp_seq_needed.

Ah, good point, I missed the "if" condition.  This can be argued to work,
sort of, given that we still hold the leaf rcu_node structure's lock,
so that there is a limit to how far grace periods can advance.

But the code would of course be much cleaner with your change.

> We are protecting them in the loop:
> 
> like this:
> for (...)
> 	if (rnp != rnp_start)
> 		raw_spin_lock_rcu_node(rnp);
> 	[...]
> 	// access rnp->gp_seq and rnp->gp_seq_needed
> 	[...]
> 	if (rnp != rnp_start)
> 		raw_spin_unlock_rcu_node(rnp);
> 
> But we don't need to do such protection in unlock_out?  I'm sorry if I'm
> missing something, but I'm wondering: if rnp->gp_seq_needed of an internal
> node can be accessed locklessly there, then why can't that be done also in
> the funnel-locking loop -- after all, we are holding rnp_start's lock
> throughout, right?

I was focused on the updates, and missed the rnp->gp_seq_req access in the
"if" statement.  The current code does sort of work, but only assuming
that the compiler doesn't tear the load, and so your change would help.
Could you please resend with your other two updated patches?  It depends
on one of the earlier patches, so does not apply cleanly as-is.  I could
hand-apply it, but that sounds like a good way to make your updated
series fail to apply.  ;-)

But could you also make the commit log explicitly call out the "if"
condition as being the offending access?

							Thanx, Paul
