Date:   Wed, 6 Jan 2021 10:12:50 +0800
From:   Shenming Lu <lushenming@...wei.com>
To:     Marc Zyngier <maz@...nel.org>
CC:     Eric Auger <eric.auger@...hat.com>, Will Deacon <will@...nel.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        <kvmarm@...ts.cs.columbia.edu>, <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>,
        Alex Williamson <alex.williamson@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>,
        "Lorenzo Pieralisi" <lorenzo.pieralisi@....com>,
        <wanghaibin.wang@...wei.com>, <yuzenghui@...wei.com>
Subject: Re: [RFC PATCH v2 3/4] KVM: arm64: GICv4.1: Restore VLPI's pending
 state to physical side

On 2021/1/5 17:25, Marc Zyngier wrote:
> On 2021-01-04 08:16, Shenming Lu wrote:
>> From: Zenghui Yu <yuzenghui@...wei.com>
>>
>> When setting the forwarding path of a VLPI (switching to the HW mode),
>> we also transfer the pending state from irq->pending_latch to the VPT
>> (especially in migration, where the pending states of VLPIs are first
>> restored into kvm's vgic). We currently send "INT+VSYNC" to trigger
>> the VLPI into the pending state.
>>
>> Signed-off-by: Zenghui Yu <yuzenghui@...wei.com>
>> Signed-off-by: Shenming Lu <lushenming@...wei.com>
>> ---
>>  arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
>> index f211a7c32704..7945d6d09cdd 100644
>> --- a/arch/arm64/kvm/vgic/vgic-v4.c
>> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
>> @@ -454,6 +454,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
>>      irq->host_irq    = virq;
>>      atomic_inc(&map.vpe->vlpi_count);
>>
>> +    /* Transfer pending state */
>> +    ret = irq_set_irqchip_state(irq->host_irq,
>> +                    IRQCHIP_STATE_PENDING,
>> +                    irq->pending_latch);
>> +    WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
> 
> Why do this if pending_latch is 0, which is likely to be
> the overwhelming case?

Yes, there is no need to do this if pending_latch is 0.
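
Something like the following should work (just an untested sketch of the
idea on top of this patch, not the final version), so that the irqchip
state is only touched when the VLPI is actually pending:

	/* Transfer the pending state only if the VLPI is pending */
	if (irq->pending_latch) {
		ret = irq_set_irqchip_state(irq->host_irq,
					    IRQCHIP_STATE_PENDING, true);
		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);

		/*
		 * Let it be pruned from ap_list later and don't bother
		 * the List Register.
		 */
		irq->pending_latch = false;
	}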

> 
>> +
>> +    /*
>> +     * Let it be pruned from ap_list later and don't bother
>> +     * the List Register.
>> +     */
>> +    irq->pending_latch = false;
> 
> What guarantees the pruning? Pruning only happens on vcpu exit,
> which means we may have the same interrupt via both the LR and
> the stream interface, which I don't believe is legal (it is
> like having two LRs holding the same interrupt).

Since the irq's pending_latch is set to false here, it will not be
populated into an LR in vgic_flush_lr_state() (vgic_target_oracle()
will return NULL for it).
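
For reference, the check in question reduces to roughly the following (a
simplified paraphrase of vgic_target_oracle() in vgic.c, corner cases
omitted); for an edge-configured interrupt such as an LPI, irq_is_pending()
is simply irq->pending_latch:

	static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
	{
		/* An active interrupt must stay on its current vcpu */
		if (irq->active)
			return irq->vcpu ? irq->vcpu : irq->target_vcpu;

		/* Enabled and pending: queue it to its target vcpu */
		if (irq->enabled && irq_is_pending(irq))
			return irq->target_vcpu;

		/* Neither active nor pending: no vcpu, so no LR is used */
		return NULL;
	}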

> 
>> +
>>  out:
>>      mutex_unlock(&its->its_lock);
>>      return ret;
> 
> Thanks,
> 
>         M.
