Message-ID: <CANN689G0N0ynSMVcH9CbOFgL_mLSswOBx5yBzPBag0AO9fk8+A@mail.gmail.com>
Date:	Thu, 7 Feb 2013 15:58:20 -0800
From:	Michel Lespinasse <walken@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	paulmck@...ux.vnet.ibm.com, Rik van Riel <riel@...hat.com>,
	Ingo Molnar <mingo@...hat.com>,
	David Howells <dhowells@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Eric Dumazet <edumazet@...gle.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Manfred Spraul <manfred@...orfullife.com>,
	linux-kernel@...r.kernel.org, john.stultz@...aro.org
Subject: Re: [RFC PATCH 1/6] kernel: implement queue spinlock API

On Thu, Feb 7, 2013 at 2:56 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Thu, 2013-02-07 at 14:34 -0800, Paul E. McKenney wrote:
>> On Tue, Jan 22, 2013 at 03:13:30PM -0800, Michel Lespinasse wrote:
>> > Introduce queue spinlocks, to be used in situations where it is desired
>> > to have good throughput even under the occasional high-contention situation.
>> >
>> > This initial implementation is based on the classic MCS spinlock,
>> > because I think this represents the nicest API we can hope for in a
>> > fast queue spinlock algorithm. The MCS spinlock has known limitations
>> > in that it performs very well under high contention, but is not as
>> > good as the ticket spinlock under low contention. I will address these
>> > limitations in a later patch, which will propose an alternative,
>> > higher performance implementation using (mostly) the same API.
>> >
>> > Sample use case acquiring mystruct->lock:
>> >
>> >   struct q_spinlock_node node;
>> >
>> >   q_spin_lock(&mystruct->lock, &node);
>> >   ...
>> >   q_spin_unlock(&mystruct->lock, &node);
>>
>> It is possible to keep the normal API for MCS locks by having the lock
>> holder remember the parameter in the lock word itself.  While spinning,
>> the node is on the stack and is not needed once the lock is acquired.
>> The pointer to the next node in the queue -is- needed, but this can be
>> stored in the lock word.
>>
>> I believe that John Stultz worked on something like this some years back,
>> so added him to CC.
>>
>
> Hmm...
>
> This could easily break if the spin_lock() is embedded in a function,
> and the unlock done in another one.
>
> (storage for the node would disappear at the function epilogue)

No, I think that's doable. The trick would be that once a thread
acquires the lock, the only remaining use of the node is to receive
the 'next' pointer if/when another thread starts contending for the
lock. So the lock state would need to distinguish between a lock
that is currently locked but not contended (the next value would then
be NULL) and a lock that is currently locked and contended (the
lock->next value is the node that goes after the current lock owner).
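
To make that concrete, here is an untested userspace sketch of the
classic MCS scheme behind the proposed API, using C11 atomics. The
field names and the struct q_spinlock layout here are made up for
illustration; they are not what the patch actually defines:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct q_spinlock_node {
	_Atomic(struct q_spinlock_node *) next;
	_Atomic(bool) wait;		/* true while we must keep spinning */
};

struct q_spinlock {
	_Atomic(struct q_spinlock_node *) tail;	/* last waiter, or NULL */
};

static void q_spin_lock(struct q_spinlock *lock,
			struct q_spinlock_node *node)
{
	struct q_spinlock_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->wait, true, memory_order_relaxed);

	/* Enqueue ourselves; the previous tail (if any) is our predecessor. */
	prev = atomic_exchange_explicit(&lock->tail, node,
					memory_order_acq_rel);
	if (!prev)
		return;		/* queue was empty: lock acquired, uncontended */

	/* Link ourselves behind the predecessor, then spin on our own node. */
	atomic_store_explicit(&prev->next, node, memory_order_release);
	while (atomic_load_explicit(&node->wait, memory_order_acquire))
		;		/* cpu_relax() here in kernel code */

	/*
	 * Lock acquired.  From here on, the only remaining use of 'node'
	 * is reading node->next in q_spin_unlock() below.
	 */
}

static void q_spin_unlock(struct q_spinlock *lock,
			  struct q_spinlock_node *node)
{
	struct q_spinlock_node *next =
		atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		/* No known successor: try to mark the queue empty again. */
		struct q_spinlock_node *old = node;

		if (atomic_compare_exchange_strong_explicit(&lock->tail,
				&old, NULL,
				memory_order_release, memory_order_relaxed))
			return;

		/* A new waiter raced with us; wait until it links itself in. */
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_acquire)))
			;
	}

	/* Hand the lock to the next waiter. */
	atomic_store_explicit(&next->wait, false, memory_order_release);
}

The point is that the only place 'node' is dereferenced after the lock
has been acquired is the node->next load in q_spin_unlock(). That is
the one pointer Paul suggests keeping in the lock word instead, and
with the locked-but-uncontended vs. locked-and-contended distinction
above it would not matter that the on-stack node goes away once the
locking function returns.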

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
