Message-ID: <683134bdea8a22d3bb784117dcfe17a1@kernel.org>
Date: Thu, 31 Dec 2020 08:57:26 +0000
From: Marc Zyngier <maz@...nel.org>
To: Shenming Lu <lushenming@...wei.com>
Cc: Will Deacon <will@...nel.org>, Eric Auger <eric.auger@...hat.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, wanghaibin.wang@...wei.com,
yuzenghui@...wei.com
Subject: Re: [PATCH RFC] KVM: arm64: vgic: Decouple the check of the
EnableLPIs bit from the ITS LPI translation
Hi Shenming,
On 2020-12-31 06:28, Shenming Lu wrote:
> When the EnableLPIs bit is set to 0, any ITS LPI requests in the
> Redistributor would be ignored. And this check is independent of
> the ITS LPI translation. So it might be better to move the check
> of the EnableLPIs bit out of the LPI resolving, and also add it
> to the path that uses the translation cache.
But by doing that, you are moving the overhead of checking for
EnableLPIs from the slow path (translation walk) to the fast
path (cache hit), which seems counter-productive.
> Besides, it seems that with this change, invalidating the
> translation cache when LPIs are disabled becomes unnecessary.
>
> Not sure if I have missed something... Thanks.
I am certainly missing the purpose of this patch.
The effect of EnableLPIs being zero is to drop the result of any
translation (a new pending bit) on the floor. Given that, it is
immaterial whether this causes a new translation or hits in the
cache, as the result is still to not pend a new interrupt.
I get the feeling that you are trying to optimise for the unusual
case where EnableLPIs is 0 *and* you have a screaming device
injecting tons of interrupts. If that is the case, I don't think
this is worth it.
Thanks,
M.
>
> Signed-off-by: Shenming Lu <lushenming@...wei.com>
> ---
> arch/arm64/kvm/vgic/vgic-its.c | 9 +++++----
> arch/arm64/kvm/vgic/vgic-mmio-v3.c | 4 +---
> 2 files changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c
> b/arch/arm64/kvm/vgic/vgic-its.c
> index 40cbaca81333..f53446bc154e 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -683,9 +683,6 @@ int vgic_its_resolve_lpi(struct kvm *kvm, struct
> vgic_its *its,
> if (!vcpu)
> return E_ITS_INT_UNMAPPED_INTERRUPT;
>
> - if (!vcpu->arch.vgic_cpu.lpis_enabled)
> - return -EBUSY;
> -
> vgic_its_cache_translation(kvm, its, devid, eventid, ite->irq);
>
> *irq = ite->irq;
> @@ -738,6 +735,9 @@ static int vgic_its_trigger_msi(struct kvm *kvm,
> struct vgic_its *its,
> if (err)
> return err;
>
> + if (!irq->target_vcpu->arch.vgic_cpu.lpis_enabled)
> + return -EBUSY;
> +
> if (irq->hw)
> return irq_set_irqchip_state(irq->host_irq,
> IRQCHIP_STATE_PENDING, true);
> @@ -757,7 +757,8 @@ int vgic_its_inject_cached_translation(struct kvm
> *kvm, struct kvm_msi *msi)
>
> db = (u64)msi->address_hi << 32 | msi->address_lo;
> irq = vgic_its_check_cache(kvm, db, msi->devid, msi->data);
> - if (!irq)
> +
> + if (!irq || !irq->target_vcpu->arch.vgic_cpu.lpis_enabled)
> return -EWOULDBLOCK;
>
> raw_spin_lock_irqsave(&irq->irq_lock, flags);
> diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> index 15a6c98ee92f..7b0749f7660d 100644
> --- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> +++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> @@ -242,10 +242,8 @@ static void vgic_mmio_write_v3r_ctlr(struct
> kvm_vcpu *vcpu,
>
> vgic_cpu->lpis_enabled = val & GICR_CTLR_ENABLE_LPIS;
>
> - if (was_enabled && !vgic_cpu->lpis_enabled) {
> + if (was_enabled && !vgic_cpu->lpis_enabled)
> vgic_flush_pending_lpis(vcpu);
> - vgic_its_invalidate_cache(vcpu->kvm);
> - }
>
> if (!was_enabled && vgic_cpu->lpis_enabled)
> vgic_enable_lpis(vcpu);
--
Jazz is not dead. It just smells funny...