Message-ID: <563B86F3.9080703@hpe.com>
Date:	Thu, 05 Nov 2015 11:42:27 -0500
From:	Waiman Long <waiman.long@....com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH tip/locking/core v9 2/6] locking/qspinlock: prefetch next
 node cacheline

On 11/02/2015 05:54 PM, Peter Zijlstra wrote:
> On Mon, Nov 02, 2015 at 05:36:26PM +0100, Peter Zijlstra wrote:
>> On Fri, Oct 30, 2015 at 07:26:33PM -0400, Waiman Long wrote:
>>> @@ -426,6 +437,15 @@ queue:
>>>   		cpu_relax();
>>>
>>>   	/*
>>> +	 * If the next pointer is defined, we are not tail anymore.
>>> +	 * In this case, claim the spinlock & release the MCS lock.
>>> +	 */
>>> +	if (next) {
>>> +		set_locked(lock);
>>> +		goto mcs_unlock;
>>> +	}
>>> +
>>> +	/*
>>>   	 * claim the lock:
>>>   	 *
>>>   	 * n,0,0 ->  0,0,1 : lock, uncontended
>>> @@ -458,6 +478,7 @@ queue:
>>>   	while (!(next = READ_ONCE(node->next)))
>>>   		cpu_relax();
>>>
>>> +mcs_unlock:
>>>   	arch_mcs_spin_unlock_contended(&next->locked);
>>>   	pv_kick_node(lock, next);
>>>
>> This, however, appears to be an independent optimization. Is it worth
>> it? Would we not already have observed val != tail in this case? If so,
>> we're just adding extra code for no gain.
>>
>> That is, if we observe @next, must we then not also observe val != tail?
> Not quite; the ordering is the other way around. If we observe next, we
> must also observe val != tail. But it's a narrow thing. Is it really
> worth it?

If we observe next, we will observe val != tail sooner or later. Once a
successor is queued, it is not possible for us to clear the tail code in
the lock; its tail xchg guarantees that.
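
A minimal user-space sketch of that ordering argument (C11 atomics; the
names and the model are illustrative only, not the kernel code): the
joining waiter swaps itself into the tail before it publishes itself via
prev->next, so once the old tail sees its ->next set, the tail can no
longer refer to the old node.

#include <stdatomic.h>
#include <stdio.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
};

/* Stand-in for the tail field of the lock word. */
static _Atomic(struct mcs_node *) tail;

/* Mirrors the enqueue order: exchange the tail first, link prev->next second. */
static void enqueue(struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_init(&node->next, NULL);
	prev = atomic_exchange_explicit(&tail, node, memory_order_acq_rel);
	if (prev)
		atomic_store_explicit(&prev->next, node, memory_order_release);
}

int main(void)
{
	struct mcs_node a, b;

	enqueue(&a);
	enqueue(&b);

	/* If a's next is visible, b's tail exchange has already happened. */
	if (atomic_load(&a.next) != NULL)
		printf("next visible, tail %s point to the old node\n",
		       atomic_load(&tail) == &a ? "still does" : "no longer does");
	return 0;
}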

Another alternative is to do something like:

+	if (!next)
		while (!(next = READ_ONCE(node->next)))
			cpu_relax();
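
In user-space terms (again an illustrative C11 sketch with made-up names,
not the actual kernel code), the suggested shape is: reuse an
already-observed next, otherwise spin for it, so both cases fall through
to a single spin loop and a single unlock site.

#include <stdatomic.h>
#include <stdio.h>

struct node {
	_Atomic(struct node *) next;
};

static inline void cpu_relax_hint(void)
{
	/* placeholder for the architecture's pause/relax hint */
}

static struct node *wait_for_next(struct node *node, struct node *next)
{
	/* Reuse an already-loaded next; otherwise spin until it appears. */
	if (!next)
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_acquire)))
			cpu_relax_hint();
	return next;
}

int main(void)
{
	struct node a, b;

	atomic_init(&b.next, NULL);
	atomic_init(&a.next, &b);

	/* Case 1: next was already observed earlier. */
	printf("%d\n", wait_for_next(&a, &b) == &b);
	/* Case 2: next was not in hand; the loop picks it up. */
	printf("%d\n", wait_for_next(&a, NULL) == &b);
	return 0;
}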

Please let me know if that is more acceptable to you.

Cheers,
Longman
