Message-ID: <67ff4a7b7f5c920c370efc11e7190a61a075ec1b.camel@redhat.com>
Date: Sun, 22 May 2022 11:24:59 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sean Christopherson <seanjc@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
"H. Peter Anvin" <hpa@...or.com>, Borislav Petkov <bp@...en8.de>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org,
Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Subject: Re: [PATCH] KVM: x86: SVM: fix nested PAUSE filtering
On Fri, 2022-05-20 at 16:05 +0200, Paolo Bonzini wrote:
> On 5/18/22 09:27, Maxim Levitsky wrote:
> > To fix this, change the fallback strategy: ignore the guest threshold
> > values and instead use/update the host threshold values, rather than zeros.
>
> Hmm, now I remember why it was using the guest values. It's because, if
> the L1 hypervisor specifies COUNT=0 or does not have filtering enabled,
> we need to obey and inject a vmexit on every PAUSE. So something like this:
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index f209c1ca540c..e6153fd3ae47 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -616,6 +616,8 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
> struct kvm_vcpu *vcpu = &svm->vcpu;
> struct vmcb *vmcb01 = svm->vmcb01.ptr;
> struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
> + u32 pause_count12;
> + u32 pause_thresh12;
>
> /*
> * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
> @@ -671,20 +673,25 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
> if (!nested_vmcb_needs_vls_intercept(svm))
> vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
>
> + pause_count12 = svm->pause_filter_enabled ? svm->nested.ctl.pause_filter_count : 0;
> + pause_thresh12 = svm->pause_threshold_enabled ? svm->nested.ctl.pause_filter_thresh : 0;
> if (kvm_pause_in_guest(svm->vcpu.kvm)) {
> - /* use guest values since host doesn't use them */
> - vmcb02->control.pause_filter_count =
> - svm->pause_filter_enabled ?
> - svm->nested.ctl.pause_filter_count : 0;
> -
> - vmcb02->control.pause_filter_thresh =
> - svm->pause_threshold_enabled ?
> - svm->nested.ctl.pause_filter_thresh : 0;
> + /* use guest values since host doesn't intercept PAUSE */
> + vmcb02->control.pause_filter_count = pause_count12;
> + vmcb02->control.pause_filter_thresh = pause_thresh12;
>
> } else {
> - /* use host values otherwise */
> + /* start from host values otherwise */
> vmcb02->control.pause_filter_count = vmcb01->control.pause_filter_count;
> vmcb02->control.pause_filter_thresh = vmcb01->control.pause_filter_thresh;
> +
> + /* ... but ensure filtering is disabled if so requested. */
> + if (vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_PAUSE)) {
> + if (!pause_count12)
> + vmcb02->control.pause_filter_count = 0;
> + if (!pause_thresh12)
> + vmcb02->control.pause_filter_thresh = 0;
> + }
Makes sense!
I also need to remember to restore the '!old' check in shrink_ple_window(),
otherwise it will once again convert a count of 0 into the minimum value.
I'll send an updated patch with this soon.
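
Roughly something like this (an untested sketch, assuming shrink_ple_window()
in svm.c still starts with the kvm_pause_in_guest() early return;
grow_ple_window() probably wants the same treatment):

--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ ... @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 	struct vmcb_control_area *control = &svm->vmcb->control;
 	int old = control->pause_filter_count;
 
-	if (kvm_pause_in_guest(vcpu->kvm))
+	/*
+	 * A zero pause_filter_count means PAUSE filtering is disabled;
+	 * don't let __shrink_ple_window() bump it back up to the minimum.
+	 */
+	if (kvm_pause_in_guest(vcpu->kvm) || !old)
 		return;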
Thanks!
> }
>
> nested_svm_transition_tlb_flush(vcpu);
>
>
> What do you think?
>
Best regards,
Maxim Levitsky