Message-ID: <kb7nwrco6s7e6catcareyic72pxvx52jbqbfc5gbqb5zu434kg@w3rrzbut3h34>
Date: Thu, 17 Jul 2025 13:44:48 +0800
From: Yao Yuan <yaoyuan@...ux.alibaba.com>
To: Keir Fraser <keirf@...gle.com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Sean Christopherson <seanjc@...gle.com>,
Eric Auger <eric.auger@...hat.com>, Oliver Upton <oliver.upton@...ux.dev>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2 2/4] KVM: arm64: vgic: Explicitly implement
vgic_dist::ready ordering
On Wed, Jul 16, 2025 at 11:07:35AM +0800, Keir Fraser wrote:
> In preparation to remove synchronize_srcu() from MMIO registration,
> remove the distributor's dependency on this implicit barrier by
> direct acquire-release synchronization on the flag write and its
> lock-free check.
>
> Signed-off-by: Keir Fraser <keirf@...gle.com>
> ---
> arch/arm64/kvm/vgic/vgic-init.c | 11 ++---------
> 1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> index 502b65049703..bc83672e461b 100644
> --- a/arch/arm64/kvm/vgic/vgic-init.c
> +++ b/arch/arm64/kvm/vgic/vgic-init.c
> @@ -567,7 +567,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
> gpa_t dist_base;
> int ret = 0;
>
> - if (likely(dist->ready))
> + if (likely(smp_load_acquire(&dist->ready)))
> return 0;
>
> mutex_lock(&kvm->slots_lock);
> @@ -598,14 +598,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
> goto out_slots;
> }
>
> - /*
> - * kvm_io_bus_register_dev() guarantees all readers see the new MMIO
> - * registration before returning through synchronize_srcu(), which also
> - * implies a full memory barrier. As such, marking the distributor as
> - * 'ready' here is guaranteed to be ordered after all vCPUs having seen
> - * a completely configured distributor.
> - */
> - dist->ready = true;
> + smp_store_release(&dist->ready, true);
No need for the store-release and load-acquire when replacing
synchronize_srcu_expedited() with call_srcu(), IIUC:

Tree SRCU on SMP:
  call_srcu()
    __call_srcu()
      srcu_gp_start_if_needed()
        __srcu_read_unlock_nmisafe()
          #ifdef CONFIG_NEED_SRCU_NMI_SAFE
            smp_mb__before_atomic() // __smp_mb() on ARM64, does nothing on x86.
          #else
            __srcu_read_unlock()
              smp_mb()
          #endif

Tiny SRCU on UP:
  Should have no memory ordering issue on UP.
> goto out_slots;
> out:
> mutex_unlock(&kvm->arch.config_lock);
> --
> 2.50.0.727.gbf7dc18ff4-goog
>