Message-ID: <aAdmWtPWsS0tHf29@Asmaa.>
Date: Tue, 22 Apr 2025 02:50:18 -0700
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Rik van Riel <riel@...riel.com>,
Tom Lendacky <thomas.lendacky@....com>, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 09/24] KVM: SEV: Generalize tracking ASID->vCPU with
xarrays
On Thu, Apr 03, 2025 at 04:05:12PM -0400, Maxim Levitsky wrote:
> On Wed, 2025-03-26 at 19:36 +0000, Yosry Ahmed wrote:
> > Following changes will track ASID to vCPU mappings for all ASIDs, not
> > just SEV ASIDs. Using per-CPU arrays with the maximum possible number of
> > ASIDs would be too expensive.
>
> Maybe add a word or two to explain that currently the # of SEV ASIDs is small
> but the # of all ASIDs is relatively large (like a 16-bit number or so)?
Good idea.
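To make the cost concrete: if the full ASID space is on the order of 16
bits as you say, a flat per-CPU array of vcpu pointers would be roughly
2^16 * 8 bytes = 512 KiB per CPU, almost all of it unused on any given
CPU, whereas the SEV ASID range is comparatively tiny. That's the
contrast I'll spell out in the changelog.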
> > @@ -1573,13 +1567,13 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > if (sev_guest(vcpu->kvm)) {
> > /*
> > * Flush the TLB when a different vCPU using the same ASID is
> > - * run on the same CPU.
> > + * run on the same CPU. xa_store() should always succeed because
> > + * the entry is reserved when the ASID is allocated.
> > */
> > asid = sev_get_asid(vcpu->kvm);
> > - if (sd->sev_vcpus[asid] != vcpu) {
> > - sd->sev_vcpus[asid] = vcpu;
> > + prev = xa_store(&sd->asid_vcpu, asid, vcpu, GFP_ATOMIC);
> > + if (prev != vcpu || WARN_ON_ONCE(xa_err(prev)))
>
> Tiny nitpick: I would have preferred to have WARN_ON_ONCE(xa_err(prev)) first in the above condition,
> because in theory we shouldn't use a value before we know it's not an error,
> but in practice this doesn't really matter.
I think it's fine because we are only comparing 'prev' to the vCPU
pointer we already have, not dereferencing it, so it should be safe.
I'd rather check the error condition last because it should never
happen anyway.
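To spell out the reasoning with the names from the patch (sketch only,
the body of the if is elided since it is not in the quoted context):

	prev = xa_store(&sd->asid_vcpu, asid, vcpu, GFP_ATOMIC);

	/*
	 * 'prev' is either the previous entry (possibly NULL) or an
	 * xa_err()-encoded error pointer.  It is only compared against
	 * 'vcpu' and handed to xa_err(), which just looks at the pointer
	 * value; nothing dereferences it, so inspecting it before the
	 * error check is safe.
	 */
	if (prev != vcpu || WARN_ON_ONCE(xa_err(prev)))
		/* a different vCPU last used this ASID: flush (elided) */;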
> > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > index 3ab2a424992c1..4929b96d3d700 100644
> > --- a/arch/x86/kvm/svm/svm.h
> > +++ b/arch/x86/kvm/svm/svm.h
> > @@ -340,8 +340,7 @@ struct svm_cpu_data {
> >
> > struct vmcb *current_vmcb;
> >
> > - /* index = sev_asid, value = vcpu pointer */
> Maybe keep the above comment?
I think it's kinda pointless tbh because it's obvious from how the
xarray is used, but I am fine with keeping it if others agree it's
useful.
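For illustration, the usage I have in mind is roughly the following
(sketch only, not the exact code in this series; unwinding a partially
failed reservation is elided):

	/*
	 * Reserve the per-CPU entries up front so that the GFP_ATOMIC
	 * xa_store() in svm_vcpu_load() never needs to allocate memory.
	 */
	bool svm_register_asid(unsigned int asid)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);

			if (xa_reserve(&sd->asid_vcpu, asid, GFP_KERNEL))
				return false;
		}
		return true;
	}

	void svm_unregister_asid(unsigned int asid)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);

			xa_erase(&sd->asid_vcpu, asid);
		}
	}

The index is the ASID and the value is the last vCPU that ran with that
ASID on that CPU, which is why the old comment feels redundant to me.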
>
> > - struct kvm_vcpu **sev_vcpus;
> > + struct xarray asid_vcpu;
> > };
> >
> > DECLARE_PER_CPU(struct svm_cpu_data, svm_data);
> > @@ -655,6 +654,8 @@ void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
> > void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
> > void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
> > int trig_mode, int vec);
> > +bool svm_register_asid(unsigned int asid);
> > +void svm_unregister_asid(unsigned int asid);
> >
> > /* nested.c */
> >
> > @@ -793,7 +794,6 @@ void sev_vm_destroy(struct kvm *kvm);
> > void __init sev_set_cpu_caps(void);
> > void __init sev_hardware_setup(void);
> > void sev_hardware_unsetup(void);
> > -int sev_cpu_init(struct svm_cpu_data *sd);
> > int sev_dev_get_attr(u32 group, u64 attr, u64 *val);
> > extern unsigned int max_sev_asid;
> > void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code);
> > @@ -817,7 +817,6 @@ static inline void sev_vm_destroy(struct kvm *kvm) {}
> > static inline void __init sev_set_cpu_caps(void) {}
> > static inline void __init sev_hardware_setup(void) {}
> > static inline void sev_hardware_unsetup(void) {}
> > -static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
> > static inline int sev_dev_get_attr(u32 group, u64 attr, u64 *val) { return -ENXIO; }
> > #define max_sev_asid 0
> > static inline void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code) {}
>
>
> Overall looks good to me.
>
> Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
Thanks!
>
> Best regards,
> Maxim Levitsky
>
>
>