Date:	Wed, 30 Jun 2010 15:28:07 +0200
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Jan Beulich <JBeulich@...ell.com>
CC:	"mingo@...e.hu" <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	Ky Srinivasan <KSrinivasan@...ell.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"hpa@...or.com" <hpa@...or.com>
Subject: Re: [PATCH 1/4, v2] x86: enlightenment for ticket spin locks - base implementation

On 06/30/2010 03:21 PM, Jan Beulich wrote:
>>>> On 30.06.10 at 14:53, Jeremy Fitzhardinge <jeremy@...p.org> wrote:
>>>>         
>> On 06/30/2010 01:52 PM, Jan Beulich wrote:
>>     
>>> I fail to see that: Depending on the hypervisor's capabilities, the
>>> two main functions could be much smaller (potentially there wouldn't
>>> even be a need for the unlock hook in some cases),
>>>       
>> What mechanism are you envisaging in that case?
>>     
> A simple yield is better than not doing anything at all.
>   

Is that true?  The main problem with ticket locks is that they require
the host scheduler to schedule the correct "next" vcpu after unlock.  If
the vcpus are just bouncing in and out of the scheduler with yields,
there's still no guarantee that the host scheduler will pick the right
vcpu at anything like the right time.  I guess if a vcpu knows it's next
it can just keep spinning while everyone else yields, and that would
work approximately OK.
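
To make that concrete, here's a rough sketch of the idea (this is not
the code from this series; the types are simplified, and
hypervisor_yield() is just a placeholder for whatever yield hypercall a
given hypervisor offers):

#include <stdint.h>

typedef uint16_t ticket_t;

struct ticket_lock {
	volatile ticket_t head;		/* ticket currently being served */
	volatile ticket_t tail;		/* next ticket to be handed out */
};

extern void hypervisor_yield(void);	/* assumed yield hypercall wrapper */
#define cpu_relax()	__asm__ __volatile__("pause" ::: "memory")

/* Spin only if we hold the very next ticket; otherwise yield the pcpu. */
static void ticket_spin_slowpath(struct ticket_lock *lock, ticket_t my_ticket)
{
	for (;;) {
		ticket_t head = lock->head;

		if (head == my_ticket)
			return;				/* lock is ours now */

		if ((ticket_t)(my_ticket - head) == 1)
			cpu_relax();			/* we're next: plain spinning is fine */
		else
			hypervisor_yield();		/* can't make progress; give the pcpu back */
	}
}

Whether the host then actually runs the one spinning vcpu in time is
still entirely up to its scheduler, which is why this only works
"approximately".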

>>> The list really is just needed to avoid pointlessly tickling CPUs
>>> that won't own the just-released lock next anyway (or would own
>>> it, but meanwhile went for another one where they also decided
>>> to go into polling mode).
>>>       
>> Did you measure that it was a particularly common case which was worth
>> optimising for?
>>     
> I didn't measure this particular case. But since the main problem
> with ticket locks shows up when (host) CPUs are overcommitted, it
> certainly is a bad idea to create even more load on the host than
> there already is (all the more so since these wakeups come in bursts).
>   

A directed wakeup is important, but I'm not sure how important its
efficiency is (since you're already deep in the slowpath by the time it
makes any difference at all).
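
For comparison, the directed-wakeup side could look roughly like this
(again only a sketch of the idea being debated, not the patch itself; it
reuses the simplified types from the sketch above, and
hypervisor_kick_cpu() plus the flat per-cpu array stand in for whatever
wakeup hypercall and per-cpu bookkeeping a real implementation would
use):

#define NR_CPUS 64

struct waiting_state {
	struct ticket_lock *lock;	/* lock this vcpu is polling on, or NULL */
	ticket_t ticket;		/* ticket it is waiting to be served */
};

static struct waiting_state waiting[NR_CPUS];

extern void hypervisor_kick_cpu(int cpu);	/* assumed directed-wakeup hypercall */

/* Unlock slow path: wake only the vcpu whose ticket is now being served,
 * rather than tickling every CPU that happens to be polling. */
static void ticket_unlock_kick(struct ticket_lock *lock, ticket_t next)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (waiting[cpu].lock == lock && waiting[cpu].ticket == next) {
			hypervisor_kick_cpu(cpu);
			break;
		}
	}
}

The scan itself is cheap compared to the cost of having gone to sleep in
the first place, which is why I'd put the emphasis on getting the right
vcpu woken rather than on how efficiently we find it.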

    J
