Message-ID: <53050657.1030306@hp.com>
Date:	Wed, 19 Feb 2014 14:30:31 -0500
From:	Waiman Long <waiman.long@...com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
	linux-arch@...r.kernel.org, x86@...nel.org,
	linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <andi@...stfloor.org>,
	Rik van Riel <riel@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	George Spelvin <linux@...izon.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Daniel J Blueman <daniel@...ascale.com>,
	Alexander Fyodorov <halcy@...dex.ru>,
	Aswin Chandramouleeswaran <aswin@...com>,
	Scott J Norton <scott.norton@...com>,
	Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@...com>
Subject: Re: [PATCH v4 1/3] qspinlock: Introducing a 4-byte queue spinlock
 implementation

On 02/19/2014 03:55 AM, Peter Zijlstra wrote:
> On Tue, Feb 18, 2014 at 07:58:49PM -0500, Waiman Long wrote:
>> On 02/18/2014 04:37 PM, Peter Zijlstra wrote:
>>> On Tue, Feb 18, 2014 at 02:39:31PM -0500, Waiman Long wrote:
>>>>>> +	/*
>>>>>> +	 * At the head of the wait queue now
>>>>>> +	 */
>>>>>> +	while (true) {
>>>>>> +		u32 qcode;
>>>>>> +		int retval;
>>>>>> +
>>>>>> +		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
>>>>>> +		if (retval > 0)
>>>>>> +			;	/* Lock not available yet */
>>>>>> +		else if (retval < 0)
>>>>>> +			/* Lock taken, can release the node & return */
>>>>>> +			goto release_node;
>>>>>> +		else if (qcode != my_qcode) {
>>>>>> +			/*
>>>>>> +			 * Just get the lock with other spinners waiting
>>>>>> +			 * in the queue.
>>>>>> +			 */
>>>>>> +			if (queue_spin_trylock_unfair(lock))
>>>>>> +				goto notify_next;
>>>>> Why is this an option at all?
>>>>>
>>>>>
>>>> Are you referring to the case (qcode != my_qcode)? This condition will be
>>>> true if more than one task has queued up.
>>> But in no case should we revert to unfair spinning or stealing. We
>>> should always respect the queueing order.
>>>
>>> If the lock tail no longer points to us, then there are further waiters
>>> and we should wait for ->next and unlock it -- after we've taken the
>>> lock.
>>>
>> A task will be in this loop when it is already the head of the queue and
>> is entitled to take the lock. The condition (qcode != my_qcode) decides
>> whether it should just take the lock, or take the lock & clear the qcode
>> simultaneously. I am a bit cautious about using
>> queue_spin_trylock_unfair() as there is a possibility that a CPU may run
>> out of queue nodes and have to fall back to unfair busy spinning.
> No; there is no such possibility. Add BUG_ON(idx>=4) and make sure of
> it.

Yes, I could do that.

However, in the generic implementation I still need some kind of atomic 
cmpxchg to set the lock bit. On x86 I could probably just do a simple 
assignment of 1 to the lock byte.
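
For illustration, a minimal user-space sketch of the two approaches,
using C11 atomics in place of the kernel primitives (the union layout
and the function names below are made up for this example, not code
from the patch):

	#include <stdatomic.h>

	/* 4-byte lock word: bytes[0] is the lock byte (assumes
	 * little-endian, as on x86); the other bits hold the qcode. */
	union qlock {
		atomic_uint  val;
		atomic_uchar bytes[4];
	};

	/* Generic path: cmpxchg loop, so a concurrent update to the
	 * qcode bits between the load and the store is never lost. */
	static void set_locked_generic(union qlock *l)
	{
		unsigned int old = atomic_load_explicit(&l->val,
						memory_order_relaxed);

		while (!atomic_compare_exchange_weak_explicit(&l->val,
				&old, old | 1,
				memory_order_acquire, memory_order_relaxed))
			;	/* 'old' was refreshed, just retry */
	}

	/* x86-style path: the lock bit occupies its own byte, so a
	 * plain byte store of 1 cannot clobber the qcode bits.  The
	 * acquire ordering is assumed to be provided by the spinning
	 * that precedes this call. */
	static void set_locked_x86(union qlock *l)
	{
		atomic_store_explicit(&l->bytes[0], 1,
				      memory_order_relaxed);
	}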

> There are simply no more than 4 contexts that can nest at any one time:
>
>    task context
>    softirq context
>    hardirq context
>    nmi context
>
> And someone contending a spinlock from NMI context should be shot
> anyway.
>
> Getting more nested spinlocks is an absolute hard fail.
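
For reference, a kernel-flavored sketch of what that would look like
(the qnodes/qnode_count names are made up here, not taken from the
patch): a fixed per-CPU array of four queue nodes, one per nesting
level, with the suggested BUG_ON on overflow:

	/*
	 * Four nodes per CPU cover the only contexts that can nest:
	 * task, softirq, hardirq and NMI.
	 */
	#define MAX_QNODES	4

	struct qnode {
		struct qnode	*next;
		int		wait;
	};

	static DEFINE_PER_CPU(struct qnode, qnodes[MAX_QNODES]);
	static DEFINE_PER_CPU(int, qnode_count);

	static struct qnode *get_qnode(void)
	{
		/* Preemption is already disabled by the slowpath. */
		int idx = this_cpu_inc_return(qnode_count) - 1;

		/* Nesting deeper than the 4 contexts is a hard bug. */
		BUG_ON(idx >= MAX_QNODES);
		return this_cpu_ptr(&qnodes[idx]);
	}

	static void put_qnode(void)
	{
		this_cpu_dec(qnode_count);
	}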
