Message-ID: <4CE19FD1.2000804@goop.org>
Date:	Mon, 15 Nov 2010 13:02:09 -0800
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	"H. Peter Anvin" <hpa@...or.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Jan Beulich <JBeulich@...ell.com>, Avi Kivity <avi@...hat.com>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	Linux Virtualization <virtualization@...ts.linux-foundation.org>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Subject: Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization

On 11/15/2010 12:14 PM, Peter Zijlstra wrote:
> On Mon, 2010-11-15 at 12:03 -0800, H. Peter Anvin wrote:
>> On 11/15/2010 12:00 PM, Jeremy Fitzhardinge wrote:
>>> Another approach I discussed with PeterZ and Mathieu is to steal the LSB
>>> of the ticket counters (halving the max CPU count) to use as a "there is
>>> someone in slowpath waiting on this lock".  But I haven't spent the time
>>> to work out an algorithm to maintain that flag (or flags, since there
>>> are bits available) in a correct and efficient way.
>>>
>> Definitely worth pondering.
> Right, so the idea was to make the ticket increment 2, which would leave
> the LSB of both the head and tail available. I think that if one were to
> set both (using a cmpxchg), the ticket fast-path wouldn't need any
> changes since head==tail is still the correct condition for acquisition.
>
> Then the unlock needs an added conditional:
>   if (tail & 1) 
> 	unlock_slowpath()
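
Roughly, the layout being described would look something like the
sketch below.  The names, field widths and helpers here are invented
for illustration (this is not the real arch_spinlock_t or the xadd
fast path); it just shows a ticket step of 2 with bit 0 of head/tail
stolen as the "someone is waiting in the slowpath" flag:

#define TICKET_STEP		2
#define TICKET_SLOWPATH_FLAG	1	/* the stolen LSB */

struct ticket_lock {
	unsigned short head;		/* ticket currently being served */
	unsigned short tail;		/* next ticket to be handed out */
};

/* placeholder: kick the slowpath waiters, e.g. via a hypercall */
static void unlock_slowpath(struct ticket_lock *lock)
{
	(void)lock;
}

static void ticket_lock(struct ticket_lock *lock)
{
	/* stand-in for the locked xadd on the real lock word */
	unsigned short me = __sync_fetch_and_add(&lock->tail, TICKET_STEP);

	/*
	 * The fast path is unchanged: spin until our ticket is being
	 * served.  Masking out the flag bit keeps the comparison on the
	 * ticket numbers whether or not the lock is marked "slowpath".
	 */
	while ((*(volatile unsigned short *)&lock->head & ~TICKET_SLOWPATH_FLAG)
	       != (me & ~TICKET_SLOWPATH_FLAG))
		;	/* after enough spins, set the flag and block */
}

static void ticket_unlock(struct ticket_lock *lock)
{
	__sync_fetch_and_add(&lock->head, TICKET_STEP);

	/* the added conditional from above */
	if (lock->tail & TICKET_SLOWPATH_FLAG)
		unlock_slowpath(lock);
}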

The tricky part is knowing how to clear the bit(s) when the last waiter
drops out of the slow path, and making that race-free with respect to
new lockers entering the slow path.  I guess you could leave the lock
in slowpath state until you're the last unlocker (ie, you're unlocking
into the uncontended state), whereupon you also clear the bits; that
would probably need a cmpxchg to make it safe WRT new lockers entering
the slowpath.
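
Hand-waving it in code, the "last unlocker clears the bits" path might
look vaguely like the following (again purely illustrative: the names
are made up, it reuses the TICKET_STEP/TICKET_SLOWPATH_FLAG defines
from the sketch above, and it assumes head and tail sit in one 32-bit
word so they can be cmpxchg'd together):

union ticket_word {
	unsigned int raw;
	struct {
		unsigned short head;	/* ticket being served */
		unsigned short tail;	/* next ticket to be handed out */
	};
};

static void ticket_unlock_slowpath(union ticket_word *lock)
{
	union ticket_word old, new;

	do {
		old.raw = *(volatile unsigned int *)&lock->raw;

		new = old;
		new.head += TICKET_STEP;

		/*
		 * If bumping head catches up with tail, we are unlocking
		 * into the uncontended state: nobody else holds a ticket,
		 * so clear the slowpath bits on both halves as well.
		 */
		if ((new.head & ~TICKET_SLOWPATH_FLAG) ==
		    (new.tail & ~TICKET_SLOWPATH_FLAG)) {
			new.head &= ~TICKET_SLOWPATH_FLAG;
			new.tail &= ~TICKET_SLOWPATH_FLAG;
		}

		/*
		 * cmpxchg so that a new locker bumping tail (and possibly
		 * re-marking the lock as slowpath) between our read and
		 * our write forces a retry rather than being lost.
		 */
	} while (!__sync_bool_compare_and_swap(&lock->raw, old.raw, new.raw));

	/* then kick whoever is waiting for the new head ticket */
}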

As a heuristic, it shouldn't be too bad performance-wise, since
(handwaving) if ticketholder N has entered the slowpath, then it's
likely that N+1 will as well.

    J
