Message-ID: <242f066aaa5f76861e7fe202944073b9@kernel.org>
Date: Fri, 20 Mar 2020 11:20:05 +0000
From: Marc Zyngier <maz@...nel.org>
To: Auger Eric <eric.auger@...hat.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Jason Cooper <jason@...edaemon.net>, kvm@...r.kernel.org,
Suzuki K Poulose <suzuki.poulose@....com>,
linux-kernel@...r.kernel.org,
Robert Richter <rrichter@...vell.com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Zenghui Yu <yuzenghui@...wei.com>,
Thomas Gleixner <tglx@...utronix.de>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v5 20/23] KVM: arm64: GICv4.1: Plumb SGI implementation selection in the distributor
Hi Eric,
On 2020-03-20 11:09, Auger Eric wrote:
> Hi Marc,
[...]
>>>> It means that userspace will be aware of some form of GICv4.1 details
>>>> (e.g., get/set vSGI state at HW level) that KVM has implemented.
>>>> Is it something that userspace is required to know? I'm open to this ;-)
>>> Not sure we would be obliged to expose fine details. This could be a
>>> generic save/restore device group/attr whose implementation at KVM
>>> level could differ depending on the version being implemented, no?
>>
>> What prevents us from hooking this synchronization to the current
>> behaviour of KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES? After all, this is
>> already the point where we synchronize the KVM view of the pending
>> state with userspace. Here, it's just a matter of picking the
>> information from some other place (i.e. the host's virtual pending
>> table).
> agreed
>>
>> The thing we need though is the guarantee that the guest isn't going
>> to get more vLPIs at that stage, as they would be lost. This
>> effectively assumes that we can also save/restore the state of the
>> signalling devices, and I don't know if we're quite there yet.
> On QEMU, when KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES is called, the VM
> is stopped.
> See cddafd8f353d ("hw/intc/arm_gicv3_its: Implement state save/restore")
> So I think it should work, no?
The guest being stopped is a good start. But my concern is on the device
side. If the device is still active (generating interrupts), these
interrupts will be dropped, because the vPE will have been unmapped from
the ITS in order to clean the ITS caches and make sure the virtual
pending table is up to date. In turn, restoring the guest may lead to a
lockup, because we would have lost these interrupts. What does QEMU on
x86 do in this case?
Thanks,
M.
--
Jazz is not dead. It just smells funny...