Message-ID: <4FFD5DA3.3010001@redhat.com>
Date:	Wed, 11 Jul 2012 14:04:03 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Christian Borntraeger <borntraeger@...ibm.com>
CC:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Ingo Molnar <mingo@...hat.com>, Rik van Riel <riel@...hat.com>,
	S390 <linux-s390@...r.kernel.org>,
	Carsten Otte <cotte@...ibm.com>, KVM <kvm@...r.kernel.org>,
	chegu vinod <chegu_vinod@...com>,
	"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>, X86 <x86@...nel.org>,
	Gleb Natapov <gleb@...hat.com>, linux390@...ibm.com,
	Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
	Joerg Roedel <joerg.roedel@....com>,
	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>,
	Alexander Graf <agraf@...e.de>,
	Paul Mackerras <paulus@...ba.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH RFC 0/2] kvm: Improving directed yield in PLE handler

On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
> On 11/07/12 11:06, Avi Kivity wrote:
> [...]
>>> Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for spinlocks, though.
>> 
>> Perhaps x86 should copy this.
> 
> See arch/s390/lib/spinlock.c
> The basic idea is to use several heuristics:
> - loop for a given amount of loops
> - check if the lock holder is currently scheduled by the hypervisor
>   (smp_vcpu_scheduled, which uses the sigp sense running instruction)
>   Don't know if such a thing is available for x86. It must be a lot cheaper
>   than a guest exit to be useful.

We could make it available via shared memory, updated using preempt
notifiers.  Of course piling on more pv makes this less attractive.
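
A minimal sketch of that host-side path, assuming a hypothetical per-vcpu
kvm_runstate page shared with the guest (the struct and the ->runstate pointer
are made up for illustration; only the preempt notifier hooks themselves are
existing KVM infrastructure):

	/* Hypothetical page shared with the guest, one per vcpu. */
	struct kvm_runstate {
		__u32 preempted;	/* 0: the vcpu is on a host cpu right now */
	};

	/* Host side: flip the flag from KVM's existing preempt notifiers. */
	static void runstate_sched_in(struct preempt_notifier *pn, int cpu)
	{
		struct kvm_vcpu *vcpu = container_of(pn, struct kvm_vcpu,
						     preempt_notifier);

		vcpu->runstate->preempted = 0;	/* ->runstate is hypothetical */
	}

	static void runstate_sched_out(struct preempt_notifier *pn,
				       struct task_struct *next)
	{
		struct kvm_vcpu *vcpu = container_of(pn, struct kvm_vcpu,
						     preempt_notifier);

		vcpu->runstate->preempted = 1;
	}

That keeps the guest-side check as cheap as a memory read, which is the whole
point of avoiding a guest exit.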

> - if lock holder is not running and we looped for a while do a directed
>   yield to that cpu.
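
For x86, the guest half of that heuristic might look roughly like the sketch
below.  Everything here is a placeholder: SPIN_RETRIES stands in for the loop
count, vcpu_is_preempted() for s390's smp_vcpu_scheduled() (e.g. reading the
shared flag sketched above), and KVM_HC_YIELD_TO for a diag9c-style
directed-yield hypercall; it also assumes the lock records which cpu holds it.

	#define SPIN_RETRIES	1000	/* "loop for a given amount of loops" */

	static void pv_spin_wait(arch_spinlock_t *lock, int holder_cpu)
	{
		int i;

		for (i = 0; i < SPIN_RETRIES; i++) {
			if (arch_spin_value_unlocked(ACCESS_ONCE(*lock)))
				return;		/* lock was released */
			if (vcpu_is_preempted(holder_cpu))
				break;		/* holder not running: stop burning cycles */
			cpu_relax();
		}

		/* Holder is preempted (or we spun long enough): directed yield. */
		kvm_hypercall1(KVM_HC_YIELD_TO, holder_cpu);
	}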
> 
>> 
>>> So there is no win here, but there are other cases where diag44 is used, e.g. cpu_relax().
>>> I have to double-check with others whether these cases are critical, but for now it seems
>>> that your dummy implementation for s390 is just fine. After all, it is a no-op until
>>> we implement something.
>> 
>> Does the data structure make sense for you?  If so we can move it to
>> common code (and manage it in kvm_vcpu_on_spin()).  We can guard it with
>> CONFIG_KVM_HAVE_CPU_RELAX_INTERCEPT or something, so other archs don't
>> have to pay anything.
> 
> Ignoring the name,

What name would you suggest?

> yes, the data structure itself seems to be based on the algorithm
> and not on arch-specific things. That should work. If we move that to common 
> code then s390 will use that scheme automatically for the cases where we call 
> kvm_vcpu_on_spin(). All other archs as well.

ARM doesn't have an instruction for cpu_relax(), so it can't intercept
it.  Given ppc's dislike of overcommit, and the way it implements
cpu_relax() by adjusting hw thread priority, I'm guessing it doesn't
intercept those either, but I'm copying the ppc people in case I'm
wrong.  So it's s390 and x86.
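
For reference, the guarded common state could be as small as the sketch below
(the config symbol and field names are illustrative, not the RFC patch itself);
kvm_vcpu_on_spin() would set in_spin_loop around its candidate scan and use
dy_eligible to skip vcpus that were not themselves spinning, while archs
without a cpu_relax intercept compile all of it away.

	#ifdef CONFIG_KVM_HAVE_CPU_RELAX_INTERCEPT
	struct kvm_spin_loop {
		bool in_spin_loop;	/* vcpu is currently in its PLE/diag44 handler */
		bool dy_eligible;	/* acceptable target for a directed yield      */
	};
	#endif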

> So this would probably improve guests that use cpu_relax(), for example
> stop_machine_run(). I have no measurements, though.

smp_call_function() as well (though that could also be converted to a
directed yield).  It seems worthwhile.

-- 
error compiling committee.c: too many arguments to function

