Message-ID: <dda34de7-703b-4d9c-8666-c1a195327f32@redhat.com>
Date: Wed, 30 Apr 2025 15:54:08 +1000
From: Gavin Shan <gshan@...hat.com>
To: Steven Price <steven.price@....com>, kvm@...r.kernel.org,
kvmarm@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>, Marc Zyngier <maz@...nel.org>,
Will Deacon <will@...nel.org>, James Morse <james.morse@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu
<yuzenghui@...wei.com>, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Joey Gouly <joey.gouly@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Christoffer Dall <christoffer.dall@....com>, Fuad Tabba <tabba@...gle.com>,
linux-coco@...ts.linux.dev,
Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>,
Shanker Donthineni <sdonthineni@...dia.com>, Alper Gun
<alpergun@...gle.com>, "Aneesh Kumar K . V" <aneesh.kumar@...nel.org>
Subject: Re: [PATCH v8 12/43] KVM: arm64: vgic: Provide helper for number of
list registers
On 4/16/25 11:41 PM, Steven Price wrote:
> Currently the number of list registers available is stored in a global
> (kvm_vgic_global_state.nr_lr). With Arm CCA the RMM is permitted to
> reserve list registers for its own use and so the number of available
> list registers can be fewer for a realm VM. Provide a wrapper function
> to fetch the global in preparation for restricting nr_lr when dealing
> with a realm VM.
>
> Signed-off-by: Steven Price <steven.price@....com>
> ---
> New patch for v6
> ---
> arch/arm64/kvm/vgic/vgic.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
With the nitpick below addressed:

Reviewed-by: Gavin Shan <gshan@...hat.com>
> diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
> index 8f8096d48925..8d189ce18ea0 100644
> --- a/arch/arm64/kvm/vgic/vgic.c
> +++ b/arch/arm64/kvm/vgic/vgic.c
> @@ -21,6 +21,11 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
> .gicv3_cpuif = STATIC_KEY_FALSE_INIT,
> };
>
> +static inline int kvm_vcpu_vgic_nr_lr(struct kvm_vcpu *vcpu)
> +{
> + return kvm_vgic_global_state.nr_lr;
> +}
> +
> /*
> * Locking order is always:
> * kvm->lock (mutex)
> @@ -802,7 +807,7 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
> lockdep_assert_held(&vgic_cpu->ap_list_lock);
>
> count = compute_ap_list_depth(vcpu, &multi_sgi);
> - if (count > kvm_vgic_global_state.nr_lr || multi_sgi)
> + if (count > kvm_vcpu_vgic_nr_lr(vcpu) || multi_sgi)
> vgic_sort_ap_list(vcpu);
>
> count = 0;
> @@ -831,7 +836,7 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
>
> raw_spin_unlock(&irq->irq_lock);
>
> - if (count == kvm_vgic_global_state.nr_lr) {
> + if (count == kvm_vcpu_vgic_nr_lr(vcpu)) {
> if (!list_is_last(&irq->ap_list,
> &vgic_cpu->ap_list_head))
> vgic_set_underflow(vcpu);
> @@ -840,7 +845,7 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
> }
>
> /* Nuke remaining LRs */
> - for (i = count ; i < kvm_vgic_global_state.nr_lr; i++)
> + for (i = count ; i < kvm_vcpu_vgic_nr_lr(vcpu); i++)
> vgic_clear_lr(vcpu, i);
>
Nitpick: the unnecessary space before the semicolon can be dropped:

    for (i = count; i < kvm_vcpu_vgic_nr_lr(vcpu); i++)
> if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
Thanks,
Gavin