Message-ID: <20130207235318.GJ2545@linux.vnet.ibm.com>
Date:	Thu, 7 Feb 2013 15:53:18 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Michel Lespinasse <walken@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Ingo Molnar <mingo@...hat.com>,
	David Howells <dhowells@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Eric Dumazet <edumazet@...gle.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Manfred Spraul <manfred@...orfullife.com>,
	linux-kernel@...r.kernel.org, john.stultz@...aro.org
Subject: Re: [RFC PATCH 1/6] kernel: implement queue spinlock API

On Thu, Feb 07, 2013 at 02:56:49PM -0800, Eric Dumazet wrote:
> On Thu, 2013-02-07 at 14:34 -0800, Paul E. McKenney wrote:
> > On Tue, Jan 22, 2013 at 03:13:30PM -0800, Michel Lespinasse wrote:
> > > Introduce queue spinlocks, to be used in situations where it is desired
> > > to have good throughput even under the occasional high-contention situation.
> > > 
> > > This initial implementation is based on the classic MCS spinlock,
> > > because I think this represents the nicest API we can hope for in a
> > > fast queue spinlock algorithm. The MCS spinlock has a known limitation:
> > > it performs very well under high contention, but is not as good as
> > > the ticket spinlock under low contention. I will address this
> > > limitation in a later patch, which will propose an alternative,
> > > higher-performance implementation using (mostly) the same API.
> > > 
> > > Sample use case acquiring mystruct->lock:
> > > 
> > >   struct q_spinlock_node node;
> > > 
> > >   q_spin_lock(&mystruct->lock, &node);
> > >   ...
> > >   q_spin_unlock(&mystruct->lock, &node);
> > 
> > It is possible to keep the normal API for MCS locks by having the lock
> > holder remember the parameter in the lock word itself.  While spinning,
> > the node lives on the stack, but it is not needed once the lock is
> > acquired.  The pointer to the next node in the queue -is- needed, but
> > this can be stored in the lock word.
> > 
> > I believe that John Stultz worked on something like this some years back,
> > so added him to CC.
> > 
> 
> Hmm...
> 
> This could easily break if the spin_lock() is done in one function
> and the unlock in another.
> 
> (storage for the node would disappear at the function epilogue)

But that is OK -- the storage is used only for spinning on.  Once a given
task has actually acquired the lock, that storage is no longer needed.
What -is- needed is the pointer to the next CPU's node, and that node
is guaranteed to persist until the next CPU acquires the lock, which
cannot happen until this CPU releases that lock.

							Thanx, Paul

