Date:	Fri, 25 Jun 2010 02:05:24 +1000
From:	Nick Piggin <npiggin@...e.de>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	John Stultz <johnstul@...ibm.com>,
	Frank Mayhar <fmayhar@...gle.com>
Subject: Re: [patch 24/52] fs: dcache reduce d_parent locking

On Thu, Jun 24, 2010 at 08:32:18AM -0700, Paul E. McKenney wrote:
> On Fri, Jun 25, 2010 at 01:07:06AM +1000, Nick Piggin wrote:
> > On Thu, Jun 24, 2010 at 10:44:22AM +0200, Peter Zijlstra wrote:
> > > On Thu, 2010-06-24 at 13:02 +1000, npiggin@...e.de wrote:
> > > > Use RCU property of dcache to simplify locking in some places where we
> > > > take d_parent and d_lock.
> > > > 
> > > > Comment: don't need rcu_deref because we take the spinlock and recheck it.
> > > 
> > > But does the LOCK barrier imply a DATA DEPENDENCY barrier? (It does on
> > > x86, and the compiler barrier implied by spin_lock() suffices to replace
> > > ACCESS_ONCE()).
> > 
> > Well the dependency we care about is from loading the parent pointer
> > to acquiring its spinlock. But we can't possibly have stale data given
> > to the spin lock operation itself because it is an RMW.
> 
> As long as you check for the structure being valid after acquiring the
> lock, I agree.  Otherwise, I would be concerned about the following
> sequence of events:
> 
> 1.	CPU 0 picks up a pointer to a given data element.
> 
> 2.	CPU 1 removes this element from the list, drops any locks that
> 	it might have, and starts waiting for a grace period to
> 	elapse.
> 
> 3.	CPU 0 acquires the lock, does some operation that would
> 	be appropriate had the element not been removed, then
> 	releases the lock.
> 
> 4.	After the grace period, CPU 1 frees the element, negating
> 	CPU 0's hard work.
> 
> The usual approach is to have a "deleted" flag or some such in the
> element that CPU 1 would set when removing the element and that CPU 0
> would check after acquiring the lock.  Which you might well already
> be doing!  ;-)
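
For concreteness, the "deleted"-flag scheme you describe might look
roughly like the sketch below; the type and function names are made up
and this is not code from the patch.

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct elem {
	spinlock_t	lock;
	bool		deleted;	/* set under ->lock before unlinking */
	struct rcu_head	rcu;
};

/*
 * Reader side ("CPU 0"): pick up the pointer under RCU, take the lock,
 * then recheck the flag before doing any real work.
 */
static struct elem *elem_find_and_lock(struct elem __rcu **slot)
{
	struct elem *e;

	rcu_read_lock();
	e = rcu_dereference(*slot);
	if (!e) {
		rcu_read_unlock();
		return NULL;
	}
	spin_lock(&e->lock);
	if (e->deleted) {
		/* Lost the race against steps 2-4 above; give up. */
		spin_unlock(&e->lock);
		rcu_read_unlock();
		return NULL;
	}
	rcu_read_unlock();
	return e;		/* locked, and known not to be deleted */
}

static void elem_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct elem, rcu));
}

/*
 * Updater side ("CPU 1"): mark the element deleted under its lock,
 * unlink it, and free it only after a grace period.
 */
static void elem_remove(struct elem __rcu **slot, struct elem *e)
{
	spin_lock(&e->lock);
	e->deleted = true;
	rcu_assign_pointer(*slot, NULL);
	spin_unlock(&e->lock);
	call_rcu(&e->rcu, elem_free_rcu);
}

Once the reader holds ->lock and sees ->deleted clear, the remover cannot
have gotten past the spin_lock() in elem_remove(), so the element stays
valid (and unremoved) for as long as the reader holds the lock.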

Thanks, yep, it's done under RCU, and after taking the lock it rechecks
that the object is still reachable via the same pointer (and if not,
unlocks and retries), so it should be fine.
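
Roughly, what I mean is something like this (made-up names, not the
actual dcache code):

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct node {
	spinlock_t		lock;
	struct node __rcu	*parent;	/* a child pins its parent */
};

/* Return n's parent with its lock held, retrying if we race a reparent. */
static struct node *node_lock_parent(struct node *n)
{
	struct node *p;

	for (;;) {
		rcu_read_lock();
		p = rcu_dereference(n->parent);
		/*
		 * RCU keeps *p around even if it stops being our parent
		 * right here, so taking its lock is safe, and the lock
		 * acquisition itself cannot act on stale data since it
		 * is an RMW.
		 */
		spin_lock(&p->lock);
		if (likely(p == rcu_access_pointer(n->parent))) {
			rcu_read_unlock();
			return p;	/* still the parent, now locked */
		}
		/* Raced with a reparent: drop everything and retry. */
		spin_unlock(&p->lock);
		rcu_read_unlock();
	}
}

This leans on the same assumption as the dcache case: a live child holds
a reference on its parent, so once we have verified p == n->parent under
p's lock, p stays pinned for as long as n does.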

Thanks,
Nick

