Message-ID: <20130208043643.GN2545@linux.vnet.ibm.com>
Date: Thu, 7 Feb 2013 20:36:43 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Michel Lespinasse <walken@...gle.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Rik van Riel <riel@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
David Howells <dhowells@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Eric Dumazet <edumazet@...gle.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Manfred Spraul <manfred@...orfullife.com>,
linux-kernel@...r.kernel.org, john.stultz@...aro.org
Subject: Re: [RFC PATCH 1/6] kernel: implement queue spinlock API
On Thu, Feb 07, 2013 at 07:48:33PM -0800, Michel Lespinasse wrote:
> On Thu, Feb 7, 2013 at 4:40 PM, Paul E. McKenney
> <paulmck@...ux.vnet.ibm.com> wrote:
> > On Thu, Feb 07, 2013 at 04:03:54PM -0800, Eric Dumazet wrote:
> >> It adds yet another memory write to store the node pointer in the
> >> lock...
> >>
> >> I suspect it's going to increase false sharing.
> >
> > On the other hand, compared to straight MCS, it reduces the need to
> > pass the node address around. Furthermore, the node pointer is likely
> > to be in the same cache line as the lock word itself, and finally
> > some architectures can do a double-pointer store.
> >
> > Of course, it might well be slower, but it seems like it is worth
> > giving it a try.
>
> Right. Another nice point about this approach is that there needs to
> be only one node per spinning CPU, so the node pointers (both tail and
> next) might be replaced with CPU identifiers, which would bring the
> spinlock size down to the same as with the ticket spinlock (which in
> turn makes it that much more likely that we'll have atomic stores of
> that size).
Good point! I must admit that this is one advantage of having the
various _irq spinlock acquisition primitives disable irqs before
spinning. ;-)
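
For illustration only, here is a rough sketch of what a CPU-indexed queue
spinlock along those lines might look like.  The names and layout below are
made up for this example and are not taken from the patch; barriers and
preemption handling are simplified, and it assumes each CPU queues on at
most one lock at a time -- which is exactly where disabling irqs before
spinning comes in:

#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/atomic.h>
#include <linux/compiler.h>

/*
 * Sketch: encode the MCS tail as a CPU number rather than a node
 * pointer, so the lock word can be as small as a ticket spinlock.
 * A single per-CPU node suffices because a CPU spins on at most one
 * queued lock at a time (irqs disabled before spinning).  Callers
 * must have preemption disabled.
 */
struct q_spinlock {
	u16 tail_cpu;		/* 0 = unlocked, else queuing CPU + 1 */
};

struct q_node {
	u16 next_cpu;		/* 0 = no successor, else successor CPU + 1 */
	u8 wait;		/* spin while this is set */
};

static DEFINE_PER_CPU(struct q_node, q_node);

static inline void q_spin_lock(struct q_spinlock *lock)
{
	struct q_node *node = this_cpu_ptr(&q_node);
	u16 me = smp_processor_id() + 1;
	u16 prev;

	node->next_cpu = 0;
	node->wait = 1;

	/* Publish ourselves as the new tail; xchg is fully ordered. */
	prev = xchg(&lock->tail_cpu, me);
	if (!prev)
		return;		/* lock was free, we now own it */

	/* Link behind the previous tail, then spin on our own node. */
	WRITE_ONCE(per_cpu(q_node, prev - 1).next_cpu, me);
	while (smp_load_acquire(&node->wait))
		cpu_relax();
}

static inline void q_spin_unlock(struct q_spinlock *lock)
{
	struct q_node *node = this_cpu_ptr(&q_node);
	u16 me = smp_processor_id() + 1;
	u16 next = READ_ONCE(node->next_cpu);

	if (!next) {
		/* No known successor; try to clear the tail. */
		if (cmpxchg(&lock->tail_cpu, me, 0) == me)
			return;
		/* A successor is mid-queue; wait for it to link in. */
		while (!(next = READ_ONCE(node->next_cpu)))
			cpu_relax();
	}
	/* Hand the lock to the successor. */
	smp_store_release(&per_cpu(q_node, next - 1).wait, 0);
}

With u16 fields the lock word matches the ticket spinlock's size, which is
the point Michel raises above about atomic stores of that size.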
Thanx, Paul