Date:   Tue, 24 Nov 2020 16:10:26 +0800
From:   Shenming Lu <lushenming@...wei.com>
To:     Marc Zyngier <maz@...nel.org>
CC:     James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Eric Auger <eric.auger@...hat.com>,
        <linux-arm-kernel@...ts.infradead.org>,
        <kvmarm@...ts.cs.columbia.edu>, <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>,
        Christoffer Dall <christoffer.dall@....com>,
        Alex Williamson <alex.williamson@...hat.com>,
        Kirti Wankhede <kwankhede@...dia.com>,
        Cornelia Huck <cohuck@...hat.com>, Neo Jia <cjia@...dia.com>,
        <wanghaibin.wang@...wei.com>, <yuzenghui@...wei.com>
Subject: Re: [RFC PATCH v1 3/4] KVM: arm64: GICv4.1: Restore VLPI's pending
 state to physical side

On 2020/11/23 17:27, Marc Zyngier wrote:
> On 2020-11-23 06:54, Shenming Lu wrote:
>> From: Zenghui Yu <yuzenghui@...wei.com>
>>
>> When setting the forwarding path of a VLPI, it is more consistent to
> 
> I'm not sure it is more consistent. It is a *new* behaviour, because it only
> matters for migration, which has been so far unsupported.

Alright, "consistent" may not be the right word...
But I still doubt whether we can really skip transferring the pending state
from KVM's vgic to the VPT in set_forwarding regardless of migration, and
similarly for unset_forwarding.
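
For unset_forwarding, the mirror-image transfer I have in mind would be roughly
the following (untested sketch for the corresponding spot in
kvm_vgic_v4_unset_forwarding(), where irq and ret are already available; it just
reads the state back with irq_get_irqchip_state() before the mapping is torn down):

	bool pending;

	/* Read the VLPI's pending state back from the physical side... */
	ret = irq_get_irqchip_state(irq->host_irq,
				    IRQCHIP_STATE_PENDING, &pending);
	WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);

	/* ... and latch it in the vgic so nothing is lost on the way back. */
	irq->pending_latch = pending;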

> 
>> also transfer the pending state from irq->pending_latch to VPT (especially
>> in migration, the pending states of VLPIs are restored into kvm's vgic
>> first). And we currently send "INT+VSYNC" to trigger a VLPI to pending.
>>
>> Signed-off-by: Zenghui Yu <yuzenghui@...wei.com>
>> Signed-off-by: Shenming Lu <lushenming@...wei.com>
>> ---
>>  arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
>> index b5fa73c9fd35..cc3ab9cea182 100644
>> --- a/arch/arm64/kvm/vgic/vgic-v4.c
>> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
>> @@ -418,6 +418,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
>>      irq->host_irq    = virq;
>>      atomic_inc(&map.vpe->vlpi_count);
>>
>> +    /* Transfer pending state */
>> +    ret = irq_set_irqchip_state(irq->host_irq,
>> +                    IRQCHIP_STATE_PENDING,
>> +                    irq->pending_latch);
>> +    WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
>> +
>> +    /*
>> +     * Let it be pruned from ap_list later and don't bother
>> +     * the List Register.
>> +     */
>> +    irq->pending_latch = false;
> 
> It occurs to me that calling into irq_set_irqchip_state() for a large
> number of interrupts can take a significant amount of time. It is also
> odd that you dump the VPT with the VPE unmapped, but rely on the VPE
> being mapped for the opposite operation.
> 
> Shouldn't these be symmetric, all performed while the VPE is unmapped?
> It would also save a lot of ITS traffic.
> 

My thought was to use the existing interface directly, without unmapping...

If we want to unmap the vPE and poke the VPT here, then, as I said in the cover
letter, set/unset_forwarding might also be called while all devices are running
at normal run time, in which case unmapping the vPE is not allowed...

Another possible solution would be to add a new dedicated interface for QEMU to
transfer these pending states to HW from the GIC VM state change handler,
corresponding to save_pending_tables?
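
Something along these lines (completely untested; the helper name is made up, and
locking against concurrent updates of the LPI list is omitted), which the GIC VM
state change handler could trigger through that new interface:

	/*
	 * Hypothetical helper for vgic-v4.c: push the pending state of every
	 * forwarded VLPI from the vgic out to the physical side in one go.
	 */
	static int vgic_v4_flush_pending_to_hw(struct kvm *kvm)
	{
		struct vgic_dist *dist = &kvm->arch.vgic;
		struct vgic_irq *irq;
		int ret = 0;

		list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
			/* Only forwarded VLPIs with pending_latch set need the INT. */
			if (!irq->hw || !irq->pending_latch)
				continue;

			ret = irq_set_irqchip_state(irq->host_irq,
						    IRQCHIP_STATE_PENDING, true);
			if (ret)
				break;

			/* Keep it out of the List Registers, as in the patch above. */
			irq->pending_latch = false;
		}

		return ret;
	}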

>> +
>>  out:
>>      mutex_unlock(&its->its_lock);
>>      return ret;
> 
> Thanks,
> 
>         M.
