Message-ID: <51E6B69F.5010608@linux.vnet.ibm.com>
Date: Wed, 17 Jul 2013 20:52:07 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Gleb Natapov <gleb@...hat.com>
CC: mingo@...hat.com, jeremy@...p.org, x86@...nel.org,
konrad.wilk@...cle.com, hpa@...or.com, pbonzini@...hat.com,
linux-doc@...r.kernel.org, habanero@...ux.vnet.ibm.com,
xen-devel@...ts.xensource.com, peterz@...radead.org,
mtosatti@...hat.com, stefano.stabellini@...citrix.com,
andi@...stfloor.org, ouyang@...pitt.edu, agraf@...e.de,
chegu_vinod@...com, torvalds@...ux-foundation.org,
avi.kivity@...il.com, tglx@...utronix.de, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, riel@...hat.com, drjones@...hat.com,
virtualization@...ts.linux-foundation.org,
srivatsa.vaddagiri@...il.com
Subject: Re: [PATCH RFC V10 15/18] kvm : Paravirtual ticketlocks support for
linux guests running on KVM hypervisor
On 07/17/2013 08:41 PM, Gleb Natapov wrote:
> On Wed, Jul 17, 2013 at 08:25:19PM +0530, Raghavendra K T wrote:
>> On 07/17/2013 08:14 PM, Gleb Natapov wrote:
>>> On Wed, Jul 17, 2013 at 07:43:01PM +0530, Raghavendra K T wrote:
>>>> On 07/17/2013 06:55 PM, Gleb Natapov wrote:
>>>>> On Wed, Jul 17, 2013 at 06:25:05PM +0530, Raghavendra K T wrote:
>>>>>> On 07/17/2013 06:15 PM, Gleb Natapov wrote:
>>>>>>> On Wed, Jul 17, 2013 at 03:35:37PM +0530, Raghavendra K T wrote:
>>>>>>>>>> Instead of halt we started with a sleep hypercall in those
>>>>>>>>>> versions. We changed to halt() once Avi suggested reusing the existing
>>>>>>>>>> sleep.
>>>>>>>>>>
>>>>>>>>>> If we use the older hypercall with a few changes like below:
>>>>>>>>>>
>>>>>>>>>> kvm_pv_wait_for_kick_op(flags, vcpu, w->lock)
>>>>>>>>>> {
>>>>>>>>>>         // a0 reserved for flags
>>>>>>>>>>         if (!w->lock)
>>>>>>>>>>                 return;
>>>>>>>>>>         DEFINE_WAIT
>>>>>>>>>>         ...
>>>>>>>>>>         end_wait
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>> How would this help if an NMI takes a lock in the critical section? The
>>>>>>>>> thing that may happen is that lock_waiting->want may hold the NMI lock's
>>>>>>>>> value, but lock_waiting->lock will point to the non-NMI lock. Setting of
>>>>>>>>> want and lock has to be atomic.
>>>>>>>>
>>>>>>>> True. So we are here:
>>>>>>>>
>>>>>>>> non NMI lock(a)
>>>>>>>> w->lock = NULL;
>>>>>>>> smp_wmb();
>>>>>>>> w->want = want;
>>>>>>>> NMI
>>>>>>>> <---------------------
>>>>>>>> NMI lock(b)
>>>>>>>> w->lock = NULL;
>>>>>>>> smp_wmb();
>>>>>>>> w->want = want;
>>>>>>>> smp_wmb();
>>>>>>>> w->lock = lock;
>>>>>>>> ---------------------->
>>>>>>>> smp_wmb();
>>>>>>>> w->lock = lock;
>>>>>>>>
>>>>>>>> So how about fixing it like this?
>>>>>>>>
>>>>>>>> again:
>>>>>>>> w->lock = NULL;
>>>>>>>> smp_wmb();
>>>>>>>> w->want = want;
>>>>>>>> smp_wmb();
>>>>>>>> w->lock = lock;
>>>>>>>>
>>>>>>>> if (!lock || w->want != want) goto again;
>>>>>>>>
>>>>>>> An NMI can happen after the if() but before the halt, and the same
>>>>>>> situation we are trying to prevent with IRQs will occur.
>>>>>>
>>>>>> True, we cannot fix that. I was thinking of fixing the inconsistency of
>>>>>> the lock,want pair.
>>>>>> But an NMI could happen after the first OR condition too.
>>>>>> /me thinks again
>>>>>>
>>>>> lock_spinning() can check that it is called in nmi context and bail out.
>>>>
>>>> Good point.
>>>> I think we can check even for irq context and bail out, so that in irq
>>>> context we continue spinning instead of taking the slowpath. No?
>>>>
>>> That will happen much more often, and irq context is not a problem anyway.
>>>
>>
>> Yes. It is not a problem. But my idea was to avoid entering the slowpath
>> lock during irq processing. Do you think that is a good idea?
>>
> Why would we disable it if its purpose is to improve handling of
> contended locks? NMI is only special because it is impossible to handle
> and should not happen anyway.
>
Yes, agreed. Indeed, I saw degradation if we allow the slowpath spinlock
to loop again.
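
To make that concrete, the bail-out would look roughly like the below on top of
the kvm_lock_spinning() slowpath in this series, i.e. bail out only for NMI
context and leave irq context alone. Sketch only, not the final patch: in_nmi()
is from <linux/hardirq.h>, and the rest of the slowpath stays as it is in
patch 15.

static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
        /*
         * A contended lock taken in NMI context would clobber this CPU's
         * per-cpu (lock, want) pair, and the pair cannot be published
         * atomically w.r.t. NMIs, so just return and keep spinning in the
         * fastpath when called from an NMI.
         */
        if (in_nmi())
                return;

        /* ... existing slowpath: publish want/lock, recheck, halt ... */
}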