Message-ID: <4FFD6BDF.2050609@linux.vnet.ibm.com>
Date: Wed, 11 Jul 2012 17:34:47 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Christian Borntraeger <borntraeger@...ibm.com>
CC: Avi Kivity <avi@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Marcelo Tosatti <mtosatti@...hat.com>,
Ingo Molnar <mingo@...hat.com>, Rik van Riel <riel@...hat.com>,
S390 <linux-s390@...r.kernel.org>,
Carsten Otte <cotte@...ibm.com>, KVM <kvm@...r.kernel.org>,
chegu vinod <chegu_vinod@...com>,
"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>, X86 <x86@...nel.org>,
Gleb Natapov <gleb@...hat.com>, linux390@...ibm.com,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
Joerg Roedel <joerg.roedel@....com>,
Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
Subject: Re: [PATCH RFC 0/2] kvm: Improving directed yield in PLE handler
On 07/11/2012 05:25 PM, Christian Borntraeger wrote:
> On 11/07/12 13:51, Raghavendra K T wrote:
>>>>> Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for spinlocks, though.
>>>>
>>>> Perhaps x86 should copy this.
>>>
>>> See arch/s390/lib/spinlock.c
>>> The basic idea is using several heuristics:
>>> - loop for a given amount of loops
>>> - check if the lock holder is currently scheduled by the hypervisor
>>> (smp_vcpu_scheduled, which uses the sigp sense running instruction)
>>> Don't know if such a thing is available for x86. It must be a lot cheaper
>>> than a guest exit to be useful.
>>
>> Unfortunately we do not have information on the lock holder.
>
> That would be an independent patch and requires guest changes.
>
Yes, as far as I can see, there are two options:
(1) extend the lock and use a spare bit in the ticketlock to indicate that
the lock is held
(2) use a per-CPU list entry.