Message-Id: <caabadb6-fe62-aaf7-260f-79f230d03a1c@linux.vnet.ibm.com>
Date: Fri, 30 Sep 2016 16:52:57 +0800
From: Pan Xinhui <xinhui@...ux.vnet.ibm.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Christian Borntraeger <borntraeger@...ibm.com>,
Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
virtualization@...ts.linux-foundation.org,
linux-s390@...r.kernel.org, xen-devel-request@...ts.xenproject.org,
kvm@...r.kernel.org, benh@...nel.crashing.org, paulus@...ba.org,
mpe@...erman.id.au, mingo@...hat.com, paulmck@...ux.vnet.ibm.com,
Will Deacon <will.deacon@....com>, kernellwp@...il.com,
jgross@...e.com, bsingharora@...il.com,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v3 0/4] implement vcpu preempted check
Hi Paolo,
Thanks for your reply.
On 2016/9/30 14:58, Paolo Bonzini wrote:
>>>>> Please consider s390 and (x86/arm) KVM. Once we have a few, more can
>>>>> follow later, but I think it's important to not only have PPC support for
>>>>> this.
>>>>
>>>> Actually the s390 preempted check via sigp sense running is available for
>>>> all hypervisors (z/VM, LPAR and KVM) which implies everywhere as you can
>>>> no longer buy s390 systems without LPAR.
>>>>
>>>> As Heiko already pointed out we could simply use a small inline function
>>>> that calls cpu_is_preempted from arch/s390/lib/spinlock (or
>>>> smp_vcpu_scheduled from smp.c)
>>>
>>> Sure, and I had vague memories of Heiko's email. This patch set however
>>> completely fails to do that trivial hooking up.
>>
>> sorry for that.
>> I will try to work it out on x86.
>
> x86 has no hypervisor support, and I'd like to understand the desired
> semantics first, so I don't think it should block this series. In
Once a guest does a hypercall or something similar, IOW there is a kvm_guest_exit, we treat that as lock holder preemption.
And PPC implements it in this way.
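Just to illustrate the PPC idea (this is only a rough sketch, not the actual patch; it assumes the hypervisor keeps a per-vCPU yield_count in the shared lppaca and uses an odd value to mean "currently scheduled out"):

#include <asm/lppaca.h>

static inline bool vcpu_is_preempted(int cpu)
{
	/* odd yield_count => the vCPU has been scheduled out by the hypervisor */
	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
}
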
> particular, there are at least the following choices:
>
> 1) exit to userspace (5-10.000 clock cycles best case) counts as
> lock holder preemption
>
Just to avoid any misunderstanding:
You are saying that the guest does an IO operation, for example, and then exits to QEMU, right?
Yes, in this scenario it's hard to guarantee that such an IO operation, or something like that, could be finished in time.
> 2) any time the vCPU thread not running counts as lock holder
> preemption
>
> To implement the latter you'd need a hypercall or MSR (at least as
> a slow path), because the KVM preempt notifier is only active
> during the KVM_RUN ioctl.
>
Seems a little expensive. :(
How many clock cycles might it cost?
I am still looking for a shared struct between KVM and the guest kernel on x86,
so that every time kvm_guest_exit/enter is called we store some info in it, and the guest kernel can quickly check whether a vCPU is running or not.
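Something like this (all names here are made up just to show the direction; they do not match any existing KVM interface):

#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/compiler.h>

/*
 * One such area per vCPU, registered with the host (e.g. via an MSR,
 * similar to how steal time is set up).
 */
struct kvm_vcpu_run_state {
	__u8 preempted;		/* host sets this on sched-out, clears it on VM entry */
	__u8 pad[63];		/* keep the flag in its own cache line */
};

DECLARE_PER_CPU(struct kvm_vcpu_run_state, kvm_vcpu_run_state);

/* guest side: just a memory load, no exit to the hypervisor */
static inline bool vcpu_is_preempted(int cpu)
{
	return READ_ONCE(per_cpu(kvm_vcpu_run_state, cpu).preempted);
}

The host would then only need to update ->preempted from kvm_guest_exit/enter (or from the preempt notifiers), so the guest-side check stays cheap.
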
Thanks,
xinhui
> Paolo
>