Message-ID: <b837b832-7085-c74d-3f9b-08335081f702@huawei.com>
Date: Wed, 6 Jan 2021 13:48:22 +0800
From: Shenming Lu <lushenming@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: Eric Auger <eric.auger@...hat.com>, Will Deacon <will@...nel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.cs.columbia.edu>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>,
Alex Williamson <alex.williamson@...hat.com>,
Cornelia Huck <cohuck@...hat.com>,
"Lorenzo Pieralisi" <lorenzo.pieralisi@....com>,
<wanghaibin.wang@...wei.com>, <yuzenghui@...wei.com>
Subject: Re: [RFC PATCH v2 2/4] KVM: arm64: GICv4.1: Try to save hw pending
state in save_pending_tables
On 2021/1/5 19:40, Marc Zyngier wrote:
> On 2021-01-05 09:13, Marc Zyngier wrote:
>> On 2021-01-04 08:16, Shenming Lu wrote:
>>> After pausing all vCPUs and devices capable of interrupting, in order
>>> to save the information of all interrupts, besides flushing the pending
>>> states in kvm's vgic, we also try to flush the states of VLPIs in the
>>> virtual pending tables into guest RAM. This requires GICv4.1 and a safe
>>> unmapping of the vPEs first.
>>>
>>> Signed-off-by: Shenming Lu <lushenming@...wei.com>
>>> ---
>>> arch/arm64/kvm/vgic/vgic-v3.c | 58 +++++++++++++++++++++++++++++++----
>>> 1 file changed, 52 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
>>> index 9cdf39a94a63..a58c94127cb0 100644
>>> --- a/arch/arm64/kvm/vgic/vgic-v3.c
>>> +++ b/arch/arm64/kvm/vgic/vgic-v3.c
>>> @@ -1,6 +1,8 @@
>>> // SPDX-License-Identifier: GPL-2.0-only
>>>
>>> #include <linux/irqchip/arm-gic-v3.h>
>>> +#include <linux/irq.h>
>>> +#include <linux/irqdomain.h>
>>> #include <linux/kvm.h>
>>> #include <linux/kvm_host.h>
>>> #include <kvm/arm_vgic.h>
>>> @@ -356,6 +358,38 @@ int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
>>>  	return 0;
>>>  }
>>>
>>> +/*
>>> + * The deactivation of the doorbell interrupt will trigger the
>>> + * unmapping of the associated vPE.
>>> + */
>>> +static void unmap_all_vpes(struct vgic_dist *dist)
>>> +{
>>> +	struct irq_desc *desc;
>>> +	int i;
>>> +
>>> +	if (!kvm_vgic_global_state.has_gicv4_1)
>>> +		return;
>>> +
>>> +	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
>>> +		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
>>> +		irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
>>> +	}
>>> +}
>>> +
>>> +static void map_all_vpes(struct vgic_dist *dist)
>>> +{
>>> +	struct irq_desc *desc;
>>> +	int i;
>>> +
>>> +	if (!kvm_vgic_global_state.has_gicv4_1)
>>> +		return;
>>> +
>>> +	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
>>> +		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
>>> +		irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
>>> +	}
>>> +}
>>> +
>>> /**
>>> * vgic_v3_save_pending_tables - Save the pending tables into guest RAM
>>> * kvm lock and all vcpu lock must be held
>>> @@ -365,14 +399,18 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
>>>  	struct vgic_dist *dist = &kvm->arch.vgic;
>>>  	struct vgic_irq *irq;
>>>  	gpa_t last_ptr = ~(gpa_t)0;
>>> -	int ret;
>>> +	int ret = 0;
>>>  	u8 val;
>>>
>>> +	/* As a preparation for getting any VLPI states. */
>>> +	unmap_all_vpes(dist);
>>
>> What if the VPEs are not mapped yet? Is it possible to snapshot a VM
>> that has not run at all?
>
> More questions: what happens to vSGIs that were mapped to the VPEs?
> Can they safely be restarted? The spec is not saying much on the subject.
Since we have already paused all vCPUs, no more vSGIs would be generated, and
no vSGI would be delivered to the vPE. Besides, the unmapping of the vPE would
not affect the (already) stored vSGI states, so I think they could be safely
restarted.
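
To make the ordering concrete, here is a rough sketch of how the save path is
bracketed (names follow the hunks above; walk_and_save_lpis() is just a
hypothetical stand-in for the existing pending-table walk, and error handling
is simplified):

static int save_pending_tables_sketch(struct kvm *kvm)
{
	struct vgic_dist *dist = &kvm->arch.vgic;
	int ret;

	/* Flush the hw pending state of VLPIs (and vSGIs) into the vPTs. */
	unmap_all_vpes(dist);

	/* Hypothetical helper: the existing walk over the pending tables. */
	ret = walk_and_save_lpis(kvm);

	/* Map the vPEs back so the guest can be safely restarted. */
	map_all_vpes(dist);

	return ret;
}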
>
> Once the unmap has taken place, it won't be possible to read their state
> via GICR_VSGIPENDR, and only the memory state can be used. This probably
> needs to be tracked as well.
Yes. Since we will map the vPEs back, could we assume that the saving of the
vLPI and vSGI states happens serially? In fact, that's what QEMU does.
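
For the memory-only case, something like the following could work (a minimal
sketch, assuming the usual one-pending-bit-per-INTID layout of the vPT;
vpt_read_pending() is a name made up here, while vpt_page is the field the
GICv4 layer already carries in struct its_vpe):

static bool vpt_read_pending(struct its_vpe *vpe, u32 intid)
{
	u8 *ptr = page_address(vpe->vpt_page);

	/*
	 * Once the vPE is unmapped, this page is the only place where the
	 * pending state of its vLPIs (and flushed vSGIs) can be found.
	 */
	return ptr[intid / BITS_PER_BYTE] & BIT(intid % BITS_PER_BYTE);
}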
>
> Thanks,
>
> M.