Message-ID: <b52c3e6a5df217b529565eae6e15c3fe246e2c5d.camel@redhat.com>
Date: Thu, 05 Oct 2023 15:59:43 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Joerg Roedel <joro@...tes.org>
Cc: kvm@...r.kernel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/10] KVM: SVM: Rename "avic_physical_id_cache" to
"avic_physical_id_entry"
On Tue, 2023-08-15 at 14:35 -0700, Sean Christopherson wrote:
> Rename the vCPU's pointer to its AVIC Physical ID entry from "cache" to
> "entry". While the field technically caches the result of the pointer
> calculation, it's all too easy to misinterpret the name and think that
> the field somehow caches the _data_ in the table.
I also strongly dislike 'avic_physical_id_cache', but if you are refactoring
it, IMHO 'avic_physical_id_entry' is just as confusing, since it's a pointer
to an entry and not the entry itself.

At least add a comment explaining where this pointer points, or maybe (not sure)
drop avic_physical_id_cache completely and calculate the pointer every time
(I doubt there is any measurable perf loss from doing so).
Best regards,
Maxim Levitsky
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/svm/avic.c | 10 +++++-----
> arch/x86/kvm/svm/svm.h | 2 +-
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
> index 6803e2d7bc22..8d162ff83aa8 100644
> --- a/arch/x86/kvm/svm/avic.c
> +++ b/arch/x86/kvm/svm/avic.c
> @@ -310,7 +310,7 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
> AVIC_PHYSICAL_ID_ENTRY_VALID_MASK;
> WRITE_ONCE(table[id], new_entry);
>
> - svm->avic_physical_id_cache = &table[id];
> + svm->avic_physical_id_entry = &table[id];
>
> return 0;
> }
> @@ -1028,14 +1028,14 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> if (kvm_vcpu_is_blocking(vcpu))
> return;
>
> - entry = READ_ONCE(*(svm->avic_physical_id_cache));
> + entry = READ_ONCE(*(svm->avic_physical_id_entry));
> WARN_ON_ONCE(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
>
> entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
> entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
> entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
>
> - WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
> + WRITE_ONCE(*(svm->avic_physical_id_entry), entry);
> avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
> }
>
> @@ -1046,7 +1046,7 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
>
> lockdep_assert_preemption_disabled();
>
> - entry = READ_ONCE(*(svm->avic_physical_id_cache));
> + entry = READ_ONCE(*(svm->avic_physical_id_entry));
>
> /* Nothing to do if IsRunning == '0' due to vCPU blocking. */
> if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
> @@ -1055,7 +1055,7 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
> avic_update_iommu_vcpu_affinity(vcpu, -1, 0);
>
> entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
> - WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
> + WRITE_ONCE(*(svm->avic_physical_id_entry), entry);
> }
>
> void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 8b798982e5d0..4362048493d1 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -261,7 +261,7 @@ struct vcpu_svm {
>
> u32 ldr_reg;
> u32 dfr_reg;
> - u64 *avic_physical_id_cache;
> + u64 *avic_physical_id_entry;
>
> /*
> * Per-vcpu list of struct amd_svm_iommu_ir: