Message-ID: <87ilx3j58x.fsf@vitty.brq.redhat.com>
Date: Mon, 08 Nov 2021 11:45:02 +0100
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Kele Huang <huangkele@...edance.com>, pbonzini@...hat.com
Cc: chaiwen.cc@...edance.com, xieyongji@...edance.com,
dengliang.1214@...edance.com, pizhenwei@...edance.com,
wanpengli@...cent.com, seanjc@...gle.com, huangkele@...edance.com,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC] KVM: x86: SVM: don't expose PV_SEND_IPI feature with AVIC

Kele Huang <huangkele@...edance.com> writes:

> Currently, AVIC is disabled if the x2apic feature is exposed to the
> guest or the in-kernel PIT is in re-injection mode.
>
> We can enable AVIC with the following options:
>
> Kmod args:
> modprobe kvm_amd avic=1 nested=0 npt=1
> QEMU args:
> ... -cpu host,-x2apic -global kvm-pit.lost_tick_policy=discard ...
>
> When the LAPIC works in xAPIC mode, both AVIC and the PV_SEND_IPI
> feature can accelerate IPI operations for the guest. However, the
> relationship between AVIC and the PV_SEND_IPI feature has not been
> sorted out.
>
> Logically, AVIC accelerates most frequent IPI operations without VMM
> intervention, while the re-hooking of apic->send_IPI_xxx done by the
> PV_SEND_IPI feature masks that acceleration out. People can get
> confused if AVIC is enabled yet they see lots of hypercall kvm_exits
> caused by IPIs.
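
(For readers less familiar with the PV IPI path: when KVM_FEATURE_PV_SEND_IPI
is exposed, the guest kernel re-hooks its apic->send_IPI_* callbacks to issue
a KVM_HC_SEND_IPI hypercall instead of programming the ICR. A rough sketch of
that guest-side hook follows; it is simplified from arch/x86/kernel/kvm.c and
is not the exact upstream code, e.g. the real version also handles APIC IDs
above 127 by adjusting a 'min' base and treats NMI vectors specially.)

/* Simplified sketch of the guest-side PV IPI hook (illustrative only). */
static void kvm_send_ipi_mask_sketch(const struct cpumask *mask, int vector)
{
	unsigned long ipi_bitmap[2] = { 0, 0 };
	int cpu, apic_id;

	for_each_cpu(cpu, mask) {
		apic_id = per_cpu(x86_cpu_to_apicid, cpu);
		__set_bit(apic_id, ipi_bitmap);	/* assumes APIC IDs 0..127 */
	}

	/* One hypercall replaces a series of ICR writes and bypasses AVIC. */
	kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap[0], ipi_bitmap[1],
		       0 /* min APIC ID */, APIC_DM_FIXED | vector);
}

static void kvm_setup_pv_ipi_sketch(void)
{
	/* This is the re-hooking the commit message refers to. */
	apic->send_IPI_mask = kvm_send_ipi_mask_sketch;
}
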
>
> Performance-wise, the benchmark tool
> https://lore.kernel.org/kvm/20171219085010.4081-1-ynorov@caviumnetworks.com/
> shows the results below:
>
> Test env:
> CPU: AMD EPYC 7742 64-Core Processor
> 2 vCPUs pinned 1:1
> idle=poll
>
> Test result (average ns per IPI over many runs):
> PV_SEND_IPI : 1860
> AVIC : 1390
>
> Besides, the discussion at https://lkml.org/lkml/2021/10/20/423
> contains some solid performance test results on this.
>
> This patch fixes this by masking out the PV_SEND_IPI feature when
> AVIC is enabled while setting up the guest vCPUs' CPUID.
>
> Signed-off-by: Kele Huang <huangkele@...edance.com>
> ---
>  arch/x86/kvm/cpuid.c   |  4 ++--
>  arch/x86/kvm/svm/svm.c | 13 +++++++++++++
>  2 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 2d70edb0f323..cc22975e2ac5 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -194,8 +194,6 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  		best->ecx |= XFEATURE_MASK_FPSSE;
>  	}
>
> -	kvm_update_pv_runtime(vcpu);
> -
>  	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
>  	vcpu->arch.reserved_gpa_bits = kvm_vcpu_reserved_gpa_bits_raw(vcpu);
>
> @@ -208,6 +206,8 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  	/* Invoke the vendor callback only after the above state is updated. */
>  	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
>
> +	kvm_update_pv_runtime(vcpu);
> +
>  	/*
>  	 * Except for the MMU, which needs to do its thing any vendor specific
>  	 * adjustments to the reserved GPA bits.
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index b36ca4e476c2..b13bcfb2617c 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4114,6 +4114,19 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  		if (nested && guest_cpuid_has(vcpu, X86_FEATURE_SVM))
>  			kvm_request_apicv_update(vcpu->kvm, false,
>  						 APICV_INHIBIT_REASON_NESTED);
> +
> +		if (!guest_cpuid_has(vcpu, X86_FEATURE_X2APIC) &&
> +		    !(nested && guest_cpuid_has(vcpu, X86_FEATURE_SVM))) {
> +			/*
> +			 * PV_SEND_IPI feature masks out AVIC acceleration to IPI.
> +			 * So, we do not expose PV_SEND_IPI feature to guest when
> +			 * AVIC is enabled.
> +			 */
> +			best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
> +			if (best && enable_apicv &&
> +			    (best->eax & (1 << KVM_FEATURE_PV_SEND_IPI)))
> +				best->eax &= ~(1 << KVM_FEATURE_PV_SEND_IPI);
> +		}

Personally, I'd prefer this to be done in userspace (e.g. QEMU), as with
this patch it becomes very non-obvious why, in certain cases, some
feature bits are missing. This also breaks migration from pre-patch KVM
to post-patch KVM with e.g. KVM_CAP_ENFORCE_PV_FEATURE_CPUID: the
feature will just disappear from underneath the guest.
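
For example, the QEMU command line from the commit message could drop
the bit itself whenever AVIC is meant to be used; something along these
lines should do it (I'm going from memory on the exact property name,
so please double-check it against your QEMU version):

... -cpu host,-x2apic,kvm-pv-ipi=off -global kvm-pit.lost_tick_policy=discard ...

That way the policy stays visible in the VM configuration instead of
being buried inside KVM.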

What we don't have in KVM is something like KVM_GET_RECOMMENDED_CPUID,
at least for KVM PV/Hyper-V features; that would have made it easier
for userspace to make 'default' decisions.
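
To illustrate what I mean (purely hypothetical: no such ioctl exists
today, and the helper and constant names below are made up):

	/*
	 * Hypothetical: ask KVM which PV/Hyper-V feature bits it recommends
	 * for this VM's current configuration, e.g. it could leave out
	 * KVM_FEATURE_PV_SEND_IPI when AVIC is going to be used.
	 */
	struct kvm_cpuid2 *cpuid = alloc_cpuid_buffer(MAX_RECOMMENDED_ENTRIES);

	if (ioctl(vm_fd, KVM_GET_RECOMMENDED_CPUID, cpuid) == 0)
		apply_recommended_bits_to_cpu_model(cpuid);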
>  	}
>  	init_vmcb_after_set_cpuid(vcpu);
>  }
--
Vitaly