Message-ID: <a8d8a373b41dbc72d007f60e83dcb7ee596a5ad5.camel@redhat.com>
Date: Fri, 28 Feb 2025 21:13:18 -0500
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>, Sean Christopherson
<seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 10/13] KVM: nSVM: Flush the TLB if L1 changes L2's
ASID
On Wed, 2025-02-05 at 18:23 +0000, Yosry Ahmed wrote:
> KVM tracks a single ASID for L2 guests. L1 could change the ASID it has
> assigned L2 due to switching to a different L2 guest or simply to avoid
> flushing L2's existing ASID. Flush L2's TLB when this happens to avoid
> reusing TLB entries from the old ASID (from L1's perspective).
>
> Remove the comment in __nested_copy_vmcb_control_to_cache() about the
> cached ASID usage, as this change makes it stale by adding another
> usage.
>
> This is heavily inspired by nVMX's handling of last_vpid.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@...ux.dev>
> ---
> arch/x86/kvm/svm/nested.c | 5 ++++-
> arch/x86/kvm/svm/svm.h | 2 ++
> 2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index e2c59eb2907e8..12bb391884299 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -368,7 +368,6 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
> to->pause_filter_count = from->pause_filter_count;
> to->pause_filter_thresh = from->pause_filter_thresh;
>
> - /* Copy asid here because nested_vmcb_check_controls will check it. */
> to->asid = from->asid;
> to->msrpm_base_pa &= ~0x0fffULL;
> to->iopm_base_pa &= ~0x0fffULL;
> @@ -509,6 +508,10 @@ static void nested_svm_entry_tlb_flush(struct kvm_vcpu *vcpu)
> kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
> }
>
> + if (svm->nested.ctl.asid != svm->nested.last_asid) {
> + svm->nested.last_asid = svm->nested.ctl.asid;
> + kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
> + }
> /*
> * TODO: optimize unconditional TLB flush/MMU sync. A partial list of
> * things to fix before this can be conditional:
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 6a73d6ed1e428..f2352135b99d3 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -211,6 +211,8 @@ struct svm_nested_state {
> * on its side.
> */
> bool force_msr_bitmap_recalc;
> +
> + u32 last_asid;
> };
>
> struct vcpu_sev_es_state {
I can't be 100% sure but overall the patch looks correct to me.
Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
Best regards,
Maxim Levitsky