Date:	Fri, 30 Aug 2013 12:43:18 +1000
From:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Michael Neuling <mikey@...ling.org>,
	Ingo Molnar <mingo@...nel.org>,
	Waiman Long <Waiman.Long@...com>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Jeff Layton <jlayton@...hat.com>,
	Miklos Szeredi <mszeredi@...e.cz>,
	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Andi Kleen <andi@...stfloor.org>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless
 update of refcount

On Thu, 2013-08-29 at 19:31 -0700, Linus Torvalds wrote:

> Also, on x86, there are no advantages to cmpxchg over a spinlock -
> each is exactly one equally serializing instruction. If anything,
> cmpxchg is worse due to having a cache read before the write, and it
> is a few cycles slower anyway. So I actually expect the x86 code to
> slow down a tiny bit for the single-threaded case, although that
> should hopefully be unmeasurable.
> 
> On POWER, you may have much less serialization for the cmpxchg. That
> may sadly be something we'll need to fix - the serialization between
> getting a lockref and checking sequence counts, etc., may need some
> extra work.

> So it may be that you are seeing unrealistically good numbers, and
> that we will need to add a memory barrier or two. On x86, due to the
> locked instruction semantics, that just isn't an issue.
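
For reference, a minimal sketch of the fast path we are talking about,
loosely modelled on the lib/lockref.c approach (field names and details
are paraphrased from memory, not verbatim from the patch):

struct lockref {
	union {
		u64 lock_count;
		struct {
			spinlock_t lock;
			unsigned int count;
		};
	};
};

void lockref_get(struct lockref *lockref)
{
	struct lockref old;

	old.lock_count = ACCESS_ONCE(lockref->lock_count);
	while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
		struct lockref new = old;

		new.count++;
		/*
		 * On x86 this cmpxchg is a single fully serializing
		 * locked instruction; on powerpc the ordering here
		 * depends entirely on the barriers inside cmpxchg().
		 */
		if (cmpxchg(&lockref->lock_count, old.lock_count,
			    new.lock_count) == old.lock_count)
			return;
		old.lock_count = ACCESS_ONCE(lockref->lock_count);
	}

	/* Lock was held: fall back to the spinlock. */
	spin_lock(&lockref->lock);
	lockref->count++;
	spin_unlock(&lockref->lock);
}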

Dunno, our cmpxchg has both acquire and release barriers. It basically
does release, xchg, then acquire, so it is equivalent to an unlock
followed by a lock.
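
Concretely, our __cmpxchg_u32() looks roughly like this (sketched from
memory; the exact barrier macros and errata workarounds are elided):

static inline unsigned long
__cmpxchg_u32(volatile unsigned int *p, unsigned long old, unsigned long new)
{
	unsigned int prev;

	__asm__ __volatile__ (
"	lwsync			\n"	/* release barrier */
"1:	lwarx	%0,0,%2		\n"	/* load with reservation */
"	cmpw	0,%0,%3		\n"	/* compare against 'old' */
"	bne-	2f		\n"	/* mismatch: bail out */
"	stwcx.	%4,0,%2		\n"	/* store-conditional of 'new' */
"	bne-	1b		\n"	/* lost reservation: retry */
"	isync			\n"	/* acquire barrier */
"2:"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (old), "r" (new)
	: "cc", "memory");

	return prev;
}

The lwsync up front is the release half and the isync at the end is
the acquire half, which is why it behaves like an unlock followed by a
lock.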

> > The numbers move around about 10% from run to run.
> 
> Please note that the whole "dentry hash chains may be better" effect
> can differ from one run to another, and it's something that will
> _persist_ between subsequent runs, so you may see "only 10%
> variability", but there may be a bigger-picture variability that
> you're not noticing because you had to reboot in between.
> 
> To be really comparable, you should run the stupid benchmark after
> fairly similar boot-up sequences. If the machine had been up for
> several days for one set of numbers, and freshly rebooted for the
> other, it can be a very unfair comparison.
> 
> (I long ago had a nice "L1 dentry cache" patch that helped with the
> fact that dentry chains *can* get long, especially if you have tons
> of memory, and that helped with this kind of variability a lot - and
> improved performance too. It was slightly racy, though, which is why
> it never got merged.)
> 
> > powerpc patch below. I'm using arch_spin_is_locked() to implement
> > arch_spin_value_unlocked().
> 
> Your "slock" is of type "volatile unsigned int slock", so it may well
> cause those temporaries to be written to memory.
> 
> It probably doesn't matter, but you may want to check that the result
> of "make lib/lockref.s" looks ok.
> 
>                  Linus
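
On the arch_spin_value_unlocked() point above: the obvious way to
build it on top of arch_spin_is_locked() is to take the lock by value,
i.e. something like this (hypothetical sketch, not the actual patch):

static inline bool arch_spin_value_unlocked(arch_spinlock_t lock)
{
	/*
	 * arch_spin_is_locked() reads the volatile "slock" field
	 * through a pointer, so the compiler may have to spill this
	 * by-value temporary to the stack instead of keeping it in a
	 * register - which is what checking lib/lockref.s would show.
	 */
	return !arch_spin_is_locked(&lock);
}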

