Date:   Fri, 20 Mar 2020 12:09:15 +0100
From:   Auger Eric <eric.auger@...hat.com>
To:     Marc Zyngier <maz@...nel.org>
Cc:     Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Jason Cooper <jason@...edaemon.net>, kvm@...r.kernel.org,
        Suzuki K Poulose <suzuki.poulose@....com>,
        linux-kernel@...r.kernel.org,
        Robert Richter <rrichter@...vell.com>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Zenghui Yu <yuzenghui@...wei.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v5 20/23] KVM: arm64: GICv4.1: Plumb SGI implementation
 selection in the distributor

Hi Marc,

On 3/20/20 10:46 AM, Marc Zyngier wrote:
> On 2020-03-20 07:59, Auger Eric wrote:
>> Hi Zenghui,
>>
>> On 3/20/20 4:08 AM, Zenghui Yu wrote:
>>> On 2020/3/20 4:38, Auger Eric wrote:
>>>> Hi Marc,
>>>> On 3/19/20 1:10 PM, Marc Zyngier wrote:
>>>>> Hi Zenghui,
>>>>>
>>>>> On 2020-03-18 06:34, Zenghui Yu wrote:
>>>>>> Hi Marc,
>>>>>>
>>>>>> On 2020/3/5 4:33, Marc Zyngier wrote:
>>>>>>> The GICv4.1 architecture gives the hypervisor the option to let
>>>>>>> the guest choose whether it wants the good old SGIs with an
>>>>>>> active state, or the new, HW-based ones that do not have one.
>>>>>>>
>>>>>>> For this, plumb the configuration of SGIs into the GICv3 MMIO
>>>>>>> handling, present GICD_TYPER2.nASSGIcap to the guest,
>>>>>>> and handle the GICD_CTLR.nASSGIreq setting.
>>>>>>>
>>>>>>> In order to be able to deal with the restore of a guest, also
>>>>>>> apply the GICD_CTLR.nASSGIreq setting at first run so that we
>>>>>>> can move the restored SGIs to the HW if that's what the guest
>>>>>>> had selected in a previous life.
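
For readers without the patch in front of them, the GICD_CTLR side of
this could look roughly like the sketch below. This is a simplified
illustration, not the actual patch; locking, RES0 handling and the
actual SW<->HW transition of the SGIs are elided:

/* Simplified sketch of the GICD_CTLR write path. */
static void vgic_mmio_write_gicd_ctlr(struct kvm_vcpu *vcpu,
				      unsigned long val)
{
	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;

	dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;

	/* No GICv4.1? Then the guest cannot ask for HW-based SGIs. */
	if (!kvm_vgic_global_state.has_gicv4_1)
		val &= ~GICD_CTLR_nASSGIreq;

	/* Remember what the guest asked for; the actual switch of the
	 * SGIs between SW and HW happens when the distributor is
	 * (re)enabled, and again at first vcpu run on restore. */
	dist->nassgireq = val & GICD_CTLR_nASSGIreq;
}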
>>>>>>
>>>>>> I'm okay with the restore path. But it seems that we still fail to
>>>>>> save the pending state of vSGIs - the software pending_latch of
>>>>>> HW-based vSGIs will not be updated (and will always be false)
>>>>>> because we inject them directly through the ITS, so
>>>>>> vgic_v3_uaccess_read_pending() can't report the correct pending
>>>>>> state to userspace (the correct one is latched in HW).
>>>>>>
>>>>>> It would be good if we could sync the hardware state into
>>>>>> pending_latch at an appropriate time (just before save), but I'm
>>>>>> not sure we can...
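
Concretely, the uaccess side would need something like the sketch
below. This is only an illustration; it assumes the GICv4.1 vSGI
irqchip can report the state latched in the vPE's virtual pending
table through irq_get_irqchip_state():

/* Sketch: query the HW for vSGIs that have been forwarded to the
 * GICv4.1, instead of only reporting the software pending_latch. */
static bool vgic_sgi_is_pending(struct kvm_vcpu *vcpu, u32 intid)
{
	struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
	bool pending = irq->pending_latch;

	if (irq->hw)
		/* Assumes the vSGI pending state is retrievable from
		 * the vPE's virtual pending table via the irqchip. */
		WARN_ON(irq_get_irqchip_state(irq->host_irq,
					      IRQCHIP_STATE_PENDING,
					      &pending));

	vgic_put_irq(vcpu->kvm, irq);
	return pending;
}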
>>>>>
>>>>> The problem is to find the "appropriate time". It would require
>>>>> defining a point in the save sequence where we transition the state
>>>>> from HW to SW. I'm not keen on adding more state than we already
>>>>> have.
>>>>
>>>> Maybe we could use a dedicated device group/attr, as we have for the
>>>> ITS save tables? Userspace would choose.
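
For illustration, userspace would then trigger it much like the
existing ITS table save; note that the attribute name below is purely
hypothetical:

/* Hypothetical userspace trigger, by analogy with
 * KVM_DEV_ARM_ITS_SAVE_TABLES. KVM_DEV_ARM_VGIC_SAVE_VSGI_STATE does
 * not exist; it only illustrates the shape of such an interface. */
struct kvm_device_attr attr = {
	.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
	.attr  = KVM_DEV_ARM_VGIC_SAVE_VSGI_STATE,	/* hypothetical */
};

if (ioctl(gic_fd, KVM_SET_DEVICE_ATTR, &attr))	/* gic_fd: vgic device fd */
	err(1, "saving vSGI state");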
>>>
>>> It means that userspace will be aware of some form of GICv4.1 details
>>> (e.g., get/set vSGI state at the HW level) that KVM has implemented.
>>> Is that something userspace is required to know? I'm open to this ;-)
>> I'm not sure we would be obliged to expose fine details. This could be
>> a generic save/restore device group/attr whose implementation at the
>> KVM level could differ depending on the version being implemented, no?
> 
> What prevents us from hooking this synchronization into the current
> behaviour of KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES? After all, this is
> already the point where we synchronize the KVM view of the pending
> state with userspace. Here, it's just a matter of picking the
> information up from some other place (i.e. the host's virtual pending
> table).
Agreed.
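
Something along the lines of the sketch below, then, with the vSGI
sync folded into the existing attribute handler
(vgic_v4_sync_sgi_state() is a made-up name for a helper doing the
per-SGI HW query shown earlier):

/* Sketch: reuse KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES as the point
 * where the HW state of GICv4.1 vSGIs is folded back into
 * pending_latch, before the pending bits go out to guest RAM. */
int vgic_v3_save_pending_tables(struct kvm *kvm)
{
	int ret;

	if (kvm_vgic_global_state.has_gicv4_1) {
		ret = vgic_v4_sync_sgi_state(kvm);	/* hypothetical */
		if (ret)
			return ret;
	}

	/* ... existing logic: walk the pending state and write it to
	 * the guest's pending tables ... */
	return 0;
}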
> 
> The thing we need though is the guarantee that the guest isn't going to
> get more vLPIs at that stage, as they would be lost. This effectively
> assumes that we can also save/restore the state of the signalling devices,
> and I don't know if we're quite there yet.
In QEMU, the VM is already stopped by the time
KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES is called; see QEMU commit
cddafd8f353d ("hw/intc/arm_gicv3_its: Implement state save/restore").
So I think it should work, no?
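
For reference, the relevant QEMU logic is roughly the following
(simplified from hw/intc/arm_gicv3_kvm.c; treat it as a sketch rather
than the exact code):

/* The pending tables are flushed from a VM state change handler,
 * which only runs once the vcpus have stopped, so no new vLPIs/vSGIs
 * can be delivered while the tables are being written out. */
static void vm_change_state_handler(void *opaque, int running,
                                    RunState state)
{
    GICv3State *s = (GICv3State *)opaque;
    Error *err = NULL;

    if (running) {
        return;
    }

    kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
                      KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES,
                      NULL, true, &err);
    if (err) {
        error_report_err(err);
    }
}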

Thanks

Eric

> 
> Thanks,
> 
>         M.
