Message-Id: <aa24b1ac-8ab6-1aa6-f864-e52b34f042d3@de.ibm.com>
Date: Fri, 30 Sep 2016 12:44:39 +0200
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Pan Xinhui <xinhui@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
virtualization@...ts.linux-foundation.org,
linux-s390@...r.kernel.org, xen-devel-request@...ts.xenproject.org,
kvm@...r.kernel.org, benh@...nel.crashing.org, paulus@...ba.org,
mpe@...erman.id.au, mingo@...hat.com, paulmck@...ux.vnet.ibm.com,
Will Deacon <will.deacon@....com>, kernellwp@...il.com,
jgross@...e.com, bsingharora@...il.com,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v3 0/4] implement vcpu preempted check
On 09/30/2016 08:58 AM, Paolo Bonzini wrote:
>>>>> Please consider s390 and (x86/arm) KVM. Once we have a few, more can
>>>>> follow later, but I think it's important to not only have PPC support for
>>>>> this.
>>>>
>>>> Actually the s390 preempted check via sigp sense running is available for
>>>> all hypervisors (z/VM, LPAR and KVM), which implies everywhere, as you can
>>>> no longer buy s390 systems without LPAR.
>>>>
>>>> As Heiko already pointed out we could simply use a small inline function
>>>> that calls cpu_is_preempted from arch/s390/lib/spinlock (or
>>>> smp_vcpu_scheduled from smp.c)
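Something along these lines would probably be enough as the s390 hookup --
untested sketch only, assuming the generic hook keeps the vcpu_is_preempted()
name from this series and that smp_vcpu_scheduled() from
arch/s390/kernel/smp.c can be made visible here:

/* sketch for arch/s390/include/asm/spinlock.h, not actual code */
#include <linux/types.h>

bool smp_vcpu_scheduled(int cpu);	/* existing sigp-sense-running helper */

#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	/* a vCPU without a backing host CPU counts as preempted */
	return !smp_vcpu_scheduled(cpu);
}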
>>>
>>> Sure, and I had vague memories of Heiko's email. This patch set however
>>> completely fails to do that trivial hooking up.
>>
>> sorry for that.
>> I will try to work it out on x86.
>
> x86 has no hypervisor support, and I'd like to understand the desired
> semantics first, so I don't think it should block this series. In
> particular, there are at least the following choices:
I think the semantics can be slightly different for different architectures;
after all, it is still a heuristic to improve performance.
>
> 1) exit to userspace (5,000-10,000 clock cycles best case) counts as
> lock holder preemption
>
> 2) any time the vCPU thread not running counts as lock holder
> preemption
>
> To implement the latter you'd need a hypercall or MSR (at least as
> a slow path), because the KVM preempt notifier is only active
> during the KVM_RUN ioctl.
FWIW, the s390 implementation uses kvm_arch_vcpu_put/load as trigger
points for (un)setting CPUSTAT_RUNNING. Strictly speaking, an exit to
userspace is not preemption, but as KVM has no control over whether we are
being scheduled out while in QEMU, this is the compromise that seems to work
quite well for the s390 spinlock code (which checks the running state before
doing a yield hypercall).
In addition, an exit to QEMU is really a rare case.
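For reference, the check in the spinlock slow path boils down to roughly
this (heavily simplified sketch, not the real code from
arch/s390/lib/spinlock.c):

/* simplified illustration of the yield decision in the spinlock slow path */
static inline void yield_to_lock_owner(unsigned int owner_cpu)
{
	/* owner is running on a real CPU: keep spinning, do not yield */
	if (smp_vcpu_scheduled(owner_cpu))
		return;

	/* owner was scheduled out by the hypervisor: donate our time slice */
	smp_yield_cpu(owner_cpu);
}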