Date:   Fri, 20 Mar 2020 08:59:48 +0100
From:   Auger Eric <eric.auger@...hat.com>
To:     Zenghui Yu <yuzenghui@...wei.com>, Marc Zyngier <maz@...nel.org>
Cc:     linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Jason Cooper <jason@...edaemon.net>,
        Robert Richter <rrichter@...vell.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>
Subject: Re: [PATCH v5 20/23] KVM: arm64: GICv4.1: Plumb SGI implementation
 selection in the distributor

Hi Zenghui,

On 3/20/20 4:08 AM, Zenghui Yu wrote:
> On 2020/3/20 4:38, Auger Eric wrote:
>> Hi Marc,
>> On 3/19/20 1:10 PM, Marc Zyngier wrote:
>>> Hi Zenghui,
>>>
>>> On 2020-03-18 06:34, Zenghui Yu wrote:
>>>> Hi Marc,
>>>>
>>>> On 2020/3/5 4:33, Marc Zyngier wrote:
>>>>> The GICv4.1 architecture gives the hypervisor the option to let
>>>>> the guest choose whether it wants the good old SGIs with an
>>>>> active state, or the new, HW-based ones that do not have one.
>>>>>
>>>>> For this, plumb the configuration of SGIs into the GICv3 MMIO
>>>>> handling, present the GICD_TYPER2.nASSGIcap bit to the guest,
>>>>> and handle the GICD_CTLR.nASSGIreq setting.
>>>>>
>>>>> In order to be able to deal with the restore of a guest, also
>>>>> apply the GICD_CTLR.nASSGIreq setting at first run so that we
>>>>> can move the restored SGIs to the HW if that's what the guest
>>>>> had selected in a previous life.
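
For context, a minimal sketch of what this nASSGIreq plumbing could
look like. This is an illustration only, not the patch itself; the
names dist->nassgireq, GICD_CTLR_nASSGIreq and vgic_v4_configure_vsgis()
are assumed from the series:

	/* Sketch: react to the guest flipping GICD_CTLR.nASSGIreq. */
	static void vgic_set_nassgireq_sketch(struct kvm *kvm, u32 ctlr)
	{
		struct vgic_dist *dist = &kvm->arch.vgic;
		bool nassgireq = ctlr & GICD_CTLR_nASSGIreq;

		if (nassgireq == dist->nassgireq)
			return;

		dist->nassgireq = nassgireq;
		/* Move all vSGIs between SW emulation and HW injection. */
		vgic_v4_configure_vsgis(kvm);
	}

Invoking the same helper at first vcpu run is what would let restored
SGIs be moved back to the HW for a guest that had selected them.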
>>>>
>>>> I'm okay with the restore path.  But it seems that we still fail to
>>>> save the pending state of vSGIs - the software pending_latch of
>>>> HW-based vSGIs will never be updated (it will always be false)
>>>> because we inject them directly through the ITS, so
>>>> vgic_v3_uaccess_read_pending() can't report the correct pending
>>>> state to userspace (the correct state is latched in HW).
>>>>
>>>> It would be good if we can sync the hardware state into pending_latch
>>>> at an appropriate time (just before save), but not sure if we can...
>>>
>>> The problem is to find the "appropriate time". It would require
>>> defining a point in the save sequence where we transition the state
>>> from HW to SW. I'm not keen on adding more state than we already have.
>>
>> Maybe we could use a dedicated device group/attr, as we have for the
>> ITS save tables? Userspace would choose.
> 
> It means that userspace will be aware of some form of GICv4.1 details
> (e.g., getting/setting vSGI state at the HW level) that KVM has
> implemented.  Is that something userspace is required to know?
> I'm open to this ;-)
I'm not sure we would be obliged to expose the fine details. This could
be a generic save/restore device group/attr whose implementation at the
KVM level could differ depending on the GIC version being implemented,
no?
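
For comparison, this is roughly how userspace drives the existing ITS
save-tables attribute today (a hedged sketch on arm64, error handling
elided); a vSGI sync attribute could follow the same pattern:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: ask KVM to flush its in-kernel ITS state to guest RAM
	 * before guest memory is saved. */
	static int its_save_tables(int its_dev_fd)
	{
		struct kvm_device_attr attr = {
			.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
			.attr  = KVM_DEV_ARM_ITS_SAVE_TABLES,
		};

		return ioctl(its_dev_fd, KVM_SET_DEVICE_ATTR, &attr);
	}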

Thanks

Eric
> 
>>
>> Thanks
>>
>> Eric
>>>
>>> But what we can do is simply ask the HW to give us the right state
>>> on userspace access, at all times. How about this:
>>>
>>> diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
>>> index 48fd9fc229a2..281fe7216c59 100644
>>> --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
>>> +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
>>> @@ -305,8 +305,18 @@ static unsigned long vgic_v3_uaccess_read_pending(struct kvm_vcpu *vcpu,
>>>        */
>>>       for (i = 0; i < len * 8; i++) {
>>>           struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>>> +        bool state = irq->pending_latch;
>>>
>>> -        if (irq->pending_latch)
>>> +        if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
>>> +            int err;
>>> +
>>> +            err = irq_get_irqchip_state(irq->host_irq,
>>> +                            IRQCHIP_STATE_PENDING,
>>> +                            &state);
>>> +            WARN_ON(err);
>>> +        }
>>> +
>>> +        if (state)
>>>               value |= (1U << i);
>>>
>>>           vgic_put_irq(vcpu->kvm, irq);
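
The symmetric question arises on the userspace write (restore) side.
A sketch of that assumed counterpart - not part of the quoted diff -
using the existing irq_set_irqchip_state() helper:

	/* Sketch: push a restored pending bit into the HW for a
	 * HW-mapped vSGI instead of leaving it in the SW latch. */
	if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
		int err = irq_set_irqchip_state(irq->host_irq,
						IRQCHIP_STATE_PENDING,
						true);
		WARN_ON(err);
	} else {
		irq->pending_latch = true;
	}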
> 
> Anyway this looks good to me and will do the right thing on a userspace
> save.
> 
>>>
>>> I can add this to "KVM: arm64: GICv4.1: Add direct injection capability
>>> to SGI registers".
> 
> Thanks,
> Zenghui
> 
