Date:	Thu, 7 Feb 2013 21:03:42 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Michel Lespinasse <walken@...gle.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Rik van Riel <riel@...hat.com>,
	Ingo Molnar <mingo@...hat.com>,
	David Howells <dhowells@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Eric Dumazet <edumazet@...gle.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Manfred Spraul <manfred@...orfullife.com>,
	linux-kernel@...r.kernel.org, john.stultz@...aro.org
Subject: Re: [RFC PATCH 1/6] kernel: implement queue spinlock API

On Thu, Feb 07, 2013 at 08:36:43PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 07, 2013 at 07:48:33PM -0800, Michel Lespinasse wrote:
> > On Thu, Feb 7, 2013 at 4:40 PM, Paul E. McKenney
> > <paulmck@...ux.vnet.ibm.com> wrote:
> > > On Thu, Feb 07, 2013 at 04:03:54PM -0800, Eric Dumazet wrote:
> > >> It adds yet another memory write to store the node pointer in the
> > >> lock...
> > >>
> > >> I suspect it's going to increase false sharing.
> > >
> > > On the other hand, compared to straight MCS, it reduces the need to
> > > pass the node address around.  Furthermore, the node pointer is likely
> > > to be in the same cache line as the lock word itself, and finally
> > > some architectures can do a double-pointer store.
> > >
> > > Of course, it might well be slower, but it seems like it is worth
> > > giving it a try.
> > 
> > Right. Another nice point about this approach is that there needs to
> > be only one node per spinning CPU, so the node pointers (both tail and
> > next) might be replaced with CPU identifiers, which would bring the
> > spinlock size down to the same as with the ticket spinlock (which in
> > turn makes it that much more likely that we'll have atomic stores of
> > that size).
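
To make the compact encoding above concrete, here is a rough standalone
sketch (not the posted patch) of an MCS-style queue lock whose lock word
and queue links carry 16-bit CPU ids instead of node pointers, so the
lock stays the size of a ticket spinlock.  It uses C11 atomics and
userspace stand-ins (sched_getcpu() for smp_processor_id(), a bare
busy-wait for cpu_relax()); the names q_spinlock, q_node, and q_nodes
are invented for illustration, and it assumes at most one spinner per
CPU, i.e. irqs are disabled while spinning.

#define _GNU_SOURCE
#include <sched.h>		/* sched_getcpu(), a userspace stand-in */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_CPUS	256
#define NO_CPU		0xffff	/* sentinel: empty queue / no successor */

struct q_node {
	_Atomic uint16_t next;	/* CPU id of our successor, or NO_CPU */
	_Atomic bool	 wait;	/* true while this CPU must keep spinning */
};

struct q_spinlock {
	_Atomic uint16_t tail;	/* CPU id of the last waiter, or NO_CPU */
};

#define Q_SPINLOCK_UNLOCKED	{ NO_CPU }	/* static initializer */

static struct q_node q_nodes[MAX_CPUS];

static void q_spin_lock(struct q_spinlock *lock)
{
	uint16_t cpu = (uint16_t)sched_getcpu();
	struct q_node *node = &q_nodes[cpu];
	uint16_t prev;

	atomic_store_explicit(&node->next, NO_CPU, memory_order_relaxed);
	atomic_store_explicit(&node->wait, true, memory_order_relaxed);

	/* Enqueue by swapping our CPU id into the 16-bit lock word. */
	prev = atomic_exchange_explicit(&lock->tail, cpu,
					memory_order_acq_rel);
	if (prev == NO_CPU)
		return;		/* queue was empty: lock acquired */

	/* Link behind the old tail, then spin on our own node only. */
	atomic_store_explicit(&q_nodes[prev].next, cpu,
			      memory_order_release);
	while (atomic_load_explicit(&node->wait, memory_order_acquire))
		;		/* cpu_relax() in kernel code */
}

static void q_spin_unlock(struct q_spinlock *lock)
{
	uint16_t cpu = (uint16_t)sched_getcpu();
	struct q_node *node = &q_nodes[cpu];
	uint16_t next = atomic_load_explicit(&node->next,
					     memory_order_acquire);

	if (next == NO_CPU) {
		uint16_t expected = cpu;

		/* No successor visible: try to mark the queue empty. */
		if (atomic_compare_exchange_strong_explicit(&lock->tail,
				&expected, NO_CPU,
				memory_order_acq_rel, memory_order_acquire))
			return;

		/* A waiter is mid-enqueue; wait for it to link in. */
		do {
			next = atomic_load_explicit(&node->next,
						    memory_order_acquire);
		} while (next == NO_CPU);
	}
	atomic_store_explicit(&q_nodes[next].wait, false,
			      memory_order_release);
}

Because unlock recomputes its node from the CPU id, no node address
needs to be passed around, and the lock word is no bigger than a ticket
lock's, so a single 16-bit atomic store can update it directly.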
> 
> Good point!  I must admit that this is one advantage of having the
> various _irq spinlock acquisition primitives disable irqs before
> spinning.  ;-)

Right...  For spinlocks that -don't- disable irqs, you need to deal with
the possibility that a CPU gets interrupted while spinning, and the
interrupt handler also tries to acquire a queued lock.  One way to deal
with this is to have a node per CPU x irq.  Of course, if interrupt
handlers always disable irqs when acquiring a spinlock, then you only
need CPU x 2.
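
Continuing the hypothetical sketch from above for that case: give each
CPU one queue node per context and pick the node at lock time.
in_irq_context() below is an invented stand-in for the kernel's
in_interrupt(); queue links would then carry (cpu, context) pairs
rather than bare CPU ids, which still fit in 16 bits.

#define MAX_CONTEXTS	2	/* task + irq; grow to the maximum irq
				 * nesting depth if handlers can spin
				 * with irqs enabled */

static struct q_node q_ctx_nodes[MAX_CPUS][MAX_CONTEXTS];

extern bool in_irq_context(void);	/* assumption: provided elsewhere */

/* Encode (cpu, context) into the 16-bit lock word and next fields. */
static inline uint16_t q_encode(unsigned int cpu, unsigned int ctx)
{
	return (uint16_t)(cpu * MAX_CONTEXTS + ctx);
}

static inline struct q_node *q_this_node(unsigned int cpu)
{
	return &q_ctx_nodes[cpu][in_irq_context() ? 1 : 0];
}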

							Thanx, Paul

