Message-ID: <51EFC1D4.9060800@linux.vnet.ibm.com>
Date: Wed, 24 Jul 2013 17:30:20 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Gleb Natapov <gleb@...hat.com>
CC: mingo@...hat.com, jeremy@...p.org, x86@...nel.org,
konrad.wilk@...cle.com, hpa@...or.com, pbonzini@...hat.com,
linux-doc@...r.kernel.org, habanero@...ux.vnet.ibm.com,
xen-devel@...ts.xensource.com, peterz@...radead.org,
mtosatti@...hat.com, stefano.stabellini@...citrix.com,
andi@...stfloor.org, attilio.rao@...rix.com, ouyang@...pitt.edu,
gregkh@...e.de, agraf@...e.de, chegu_vinod@...com,
torvalds@...ux-foundation.org, avi.kivity@...il.com,
tglx@...utronix.de, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, riel@...hat.com, drjones@...hat.com,
virtualization@...ts.linux-foundation.org,
srivatsa.vaddagiri@...il.com
Subject: Re: [PATCH RFC V11 15/18] kvm : Paravirtual ticketlocks support for
linux guests running on KVM hypervisor
On 07/24/2013 04:09 PM, Gleb Natapov wrote:
> On Wed, Jul 24, 2013 at 03:15:50PM +0530, Raghavendra K T wrote:
>> On 07/23/2013 08:37 PM, Gleb Natapov wrote:
>>> On Mon, Jul 22, 2013 at 11:50:16AM +0530, Raghavendra K T wrote:
>>>> +static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>> [...]
>>>> +
>>>> + /*
>>>> + * Halt until it's our turn and we get kicked. Note that we do a
>>>> + * safe halt for the irq-enabled case, to avoid a hang when the lock
>>>> + * info is overwritten in the irq spinlock slow path and no spurious
>>>> + * interrupt occurs to save us.
>>>> + */
>>>> + if (arch_irqs_disabled_flags(flags))
>>>> + halt();
>>>> + else
>>>> + safe_halt();
>>>> +
>>>> +out:
>>> So here interrupts can now be either disabled or enabled. The previous
>>> version disabled interrupts here, so are we sure it is safe to have them
>>> enabled at this point? I do not see any problem yet, but will keep thinking.
>>
>> If we enable interrupts here, then
>>
>>
>>>> + cpumask_clear_cpu(cpu, &waiting_cpus);
>>
>> and if we start serving the lock for an interrupt that arrives here,
>> the cpumask clear and w->lock = NULL may not happen atomically.
>> If the irq spinlock does not take the slow path, we would have a
>> non-NULL value for lock, but no information in waiting_cpus.
>>
>> I am still thinking about what the problem with that would be.
>>
> Exactly; for the kicker, the waiting_cpus and w->lock updates are
> non-atomic anyway.
>
>>>> + w->lock = NULL;
>>>> + local_irq_restore(flags);
>>>> + spin_time_accum_blocked(start);
>>>> +}
>>>> +PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);
>>>> +
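To make the window discussed above concrete, here is a minimal sketch of the
tail of the slow path with the racy region annotated (the declarations mirror
the quoted patch; this is only an illustration, not the patch itself):

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/irqflags.h>
#include <asm/spinlock_types.h>

/* Per-CPU slot that the slow path registers before halting. */
struct kvm_lock_waiting {
	struct arch_spinlock *lock;
	__ticket_t want;
};
static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
static cpumask_t waiting_cpus;

/* Tail of kvm_lock_spinning(), after halt()/safe_halt() has returned. */
static void lock_spinning_tail(struct kvm_lock_waiting *w, int cpu,
			       unsigned long flags)
{
	/*
	 * With interrupts enabled here, an interrupt taken at this point
	 * whose handler contends on a spinlock and also enters the slow
	 * path reuses this CPU's lock_waiting slot and re-sets our bit in
	 * waiting_cpus.  The two updates below are then not atomic with
	 * respect to that reuse: we may clear the bit and the lock pointer
	 * that the interrupt's slow path has just published.
	 */
	cpumask_clear_cpu(cpu, &waiting_cpus);
	w->lock = NULL;
	local_irq_restore(flags);
}

The kicker quoted just below only samples w->lock and w->want via
ACCESS_ONCE(), which is the sense in which those updates are already
non-atomic from its point of view.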
>>>> +/* Kick vcpu waiting on @lock->head to reach value @ticket */
>>>> +static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
>>>> +{
>>>> + int cpu;
>>>> +
>>>> + add_stats(RELEASED_SLOW, 1);
>>>> + for_each_cpu(cpu, &waiting_cpus) {
>>>> + const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>>>> + if (ACCESS_ONCE(w->lock) == lock &&
>>>> + ACCESS_ONCE(w->want) == ticket) {
>>>> + add_stats(RELEASED_SLOW_KICKED, 1);
>>>> + kvm_kick_cpu(cpu);
>>> What about using an NMI to wake the sleepers? I think it was discussed,
>>> but I forgot why it was dismissed.
>>
>> I think I missed that discussion; I'll go back and check. So what is
>> the idea here? That we can easily wake up halted vcpus that have
>> interrupts disabled?
> We can, of course. IIRC the objection was that the NMI handling path is
> very fragile, and handling an NMI on each wakeup will be more expensive
> than waking up a guest without injecting an event, but it is still
> interesting to see the numbers.
>
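(Just to spell out what an NMI-based wakeup would mean: on the host side it
would presumably be something like the hypothetical sketch below, using the
existing KVM helpers; the fragile part is that the guest NMI handler then
has to recognize and discard these wakeup NMIs.)

#include <linux/kvm_host.h>

/* Hypothetical host-side kick via NMI instead of the PV kick. */
static void pv_kick_vcpu_with_nmi(struct kvm_vcpu *vcpu)
{
	kvm_inject_nmi(vcpu);	/* queue an NMI for the target vcpu */
	kvm_vcpu_kick(vcpu);	/* and make sure it leaves halt */
}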
Hmm, now I remember: we had tried a request-based mechanism (a new
request like REQ_UNHALT) and processed that. It worked, but needed some
complex hacks in vcpu_enter_guest to avoid a guest hang when the
request got cleared, so we left it there:
https://lkml.org/lkml/2012/4/30/67
I do not remember the performance impact, though.
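Roughly, the shape of that request-based attempt was something like the
sketch below (the request name and bit are made up here for illustration;
the actual patch is in the link above):

#include <linux/kvm_host.h>

#define KVM_REQ_PVLOCK_KICK	30	/* made-up request bit, for illustration */

/* Kick side: post a request and wake the target vcpu. */
static void pv_request_kick(struct kvm_vcpu *vcpu)
{
	kvm_make_request(KVM_REQ_PVLOCK_KICK, vcpu);
	kvm_vcpu_kick(vcpu);
}

/*
 * Consumer side, called on the way back into the guest: unhalt the
 * vcpu when the request is pending.  The complex hacks mentioned above
 * were about not losing this request (and hanging the guest) when the
 * bit gets cleared on some other exit path.
 */
static bool pv_consume_kick(struct kvm_vcpu *vcpu)
{
	return kvm_check_request(KVM_REQ_PVLOCK_KICK, vcpu);
}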