Message-ID: <Y8cKQRIbpLWVcdcw@google.com>
Date: Tue, 17 Jan 2023 20:51:13 +0000
From: Oliver Upton <oliver.upton@...ux.dev>
To: Gavin Shan <gshan@...hat.com>
Cc: kvmarm@...ts.linux.dev, kvmarm@...ts.cs.columbia.edu,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, maz@...nel.org, corbet@....net,
james.morse@....com, suzuki.poulose@....com, yuzenghui@...wei.com,
catalin.marinas@....com, will@...nel.org, ricarkol@...gle.com,
eric.auger@...hat.com, yuzhe@...china.com, renzhengeek@...il.com,
ardb@...nel.org, peterx@...hat.com, seanjc@...gle.com,
shan.gavin@...il.com
Subject: Re: [PATCH 1/4] KVM: arm64: Allow saving vgic3 LPI pending status in
no running vcpu context
Hi Gavin,
On Mon, Jan 16, 2023 at 12:04:02PM +0800, Gavin Shan wrote:
> When the dirty ring is enabled, dirty page information is pushed to
> the dirty ring if there is a running VCPU context. Otherwise, the
> dirty page information is still tracked by the backup dirty bitmap.
> In order to detect whether there is a running VCPU context when a
> guest page becomes dirty, kvm_arch_allow_write_without_running_vcpu()
> was introduced to warn about writes that happen without a running
> VCPU context in unexpected cases.
>
> Besides the site that saves the ITS tables, the vgic3 LPI pending
> status can also be saved without a running VCPU context: this happens
> when an ITS ITE is restored through the KVM_DEV_ARM_ITS_RESTORE_TABLES
> command on the 'kvm-arm-vgic-its' device.
>
> Fix it by allowing the vgic3 LPI pending status to be saved without
> a running VCPU context.
>
> Signed-off-by: Gavin Shan <gshan@...hat.com>
> ---
> Documentation/virt/kvm/api.rst | 5 +++--
> arch/arm64/kvm/vgic/vgic-its.c | 3 ++-
> arch/arm64/kvm/vgic/vgic-v3.c | 3 +++
> include/kvm/arm_vgic.h | 1 +
> 4 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 9807b05a1b57..18b245a0ba02 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -8071,8 +8071,9 @@ state is final and avoid missing dirty pages from another ioctl ordered
> after the bitmap collection.
>
> NOTE: One example of using the backup bitmap is saving arm64 vgic/its
> -tables through KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} command on
> -KVM device "kvm-arm-vgic-its" when dirty ring is enabled.
> +tables and vgic3 LPI pending status through KVM_DEV_ARM_{VGIC_GRP_CTRL,
> +ITS_SAVE_TABLES} and KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_RESTORE_TABLES}
> +command on KVM device "kvm-arm-vgic-its" when dirty ring is enabled.
>
> 8.30 KVM_CAP_XEN_HVM
> --------------------
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index 94a666dd1443..119a9c7a0a52 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -2792,7 +2792,8 @@ bool kvm_arch_allow_write_without_running_vcpu(struct kvm *kvm)
> {
> struct vgic_dist *dist = &kvm->arch.vgic;
>
> - return dist->save_its_tables_in_progress;
> + return dist->save_vgic_v3_tables_in_progress ||
> + dist->save_its_tables_in_progress;
I'd much prefer using a single bool to keep track of this, i.e.:

  return dist->save_tables_in_progress;
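Roughly (just a sketch, renaming the existing field in
include/kvm/arm_vgic.h rather than adding a second one next to it):

-	bool save_its_tables_in_progress;
+	bool save_tables_in_progress;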
> }
>
> static int vgic_its_set_attr(struct kvm_device *dev,
> diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> index 2074521d4a8c..32998c8587a8 100644
> --- a/arch/arm64/kvm/vgic/vgic-v3.c
> +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> @@ -304,6 +304,7 @@ void vgic_v3_enable(struct kvm_vcpu *vcpu)
> int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
> {
> struct kvm_vcpu *vcpu;
> + struct vgic_dist *dist = &kvm->arch.vgic;
> int byte_offset, bit_nr;
> gpa_t pendbase, ptr;
> bool status;
> @@ -339,7 +340,9 @@ int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
> if (status) {
> /* clear consumed data */
> val &= ~(1 << bit_nr);
> + dist->save_vgic_v3_tables_in_progress = true;
> ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
> + dist->save_vgic_v3_tables_in_progress = false;
With the above suggestion of using a bool, this should become a helper
used at all the affected callsites:
static int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa,
                                 const void *data, unsigned long len)
{
        struct vgic_dist *dist = &kvm->arch.vgic;
        int ret;

        dist->save_tables_in_progress = true;
        ret = kvm_write_guest_lock(kvm, gpa, data, len);
        dist->save_tables_in_progress = false;

        return ret;
}
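
With that in place, the hunk above in vgic_v3_lpi_sync_pending_status()
would just become something like this (and similarly for the existing
kvm_write_guest_lock() callers in the ITS table save path):

        if (status) {
                /* clear consumed data */
                val &= ~(1 << bit_nr);
                ret = vgic_write_guest_lock(kvm, ptr, &val, 1);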
--
Thanks,
Oliver