Message-ID: <20200416160939.7e9c1621@why>
Date: Thu, 16 Apr 2020 16:09:39 +0100
From: Marc Zyngier <maz@...nel.org>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Keqian Zhu <zhukeqian1@...wei.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.cs.columbia.edu, James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Will Deacon <will@...nel.org>,
Suzuki K Poulose <suzuki.poulose@....com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Jay Zhou <jianjay.zhou@...wei.com>, wanghaibin.wang@...wei.com
Subject: Re: [PATCH v2] KVM/arm64: Support enabling dirty log gradually in
small chunks
On Wed, 15 Apr 2020 18:13:56 +0200
Paolo Bonzini <pbonzini@...hat.com> wrote:
> On 13/04/20 14:20, Keqian Zhu wrote:
> > There is already support for enabling dirty log gradually in small chunks
> > for x86 in commit 3c9bd4006bfc ("KVM: x86: enable dirty log gradually in
> > small chunks"). This adds support for arm64.
> >
> > x86 still write-protects all huge pages when DIRTY_LOG_INITIALLY_ALL_SET
> > is enabled. However, for arm64, both huge pages and normal pages can be
> > write-protected gradually by userspace.
> >
> > On the Huawei Kunpeng 920 2.6GHz platform, I did some tests on 128G
> > Linux VMs with different page sizes. The memory pressure is 127G in each
> > case. The time taken by memory_global_dirty_log_start in QEMU is listed
> > below:
> >
> > Page Size    Before    After Optimization
> >    4K        650ms     1.8ms
> >    2M        4ms       1.8ms
> >    1G        2ms       1.8ms
> >
> > Besides the time reduction, the biggest gain is that we minimize the
> > performance side effect (caused by dissolving huge pages and marking
> > memslots dirty) on the guest after enabling dirty logging.
> >
> > Signed-off-by: Keqian Zhu <zhukeqian1@...wei.com>
> > ---
> > Documentation/virt/kvm/api.rst | 2 +-
> > arch/arm64/include/asm/kvm_host.h | 3 +++
> > virt/kvm/arm/mmu.c | 12 ++++++++++--
> > 3 files changed, 14 insertions(+), 3 deletions(-)
> >
> > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > index efbbe570aa9b..0017f63fa44f 100644
> > --- a/Documentation/virt/kvm/api.rst
> > +++ b/Documentation/virt/kvm/api.rst
> > @@ -5777,7 +5777,7 @@ will be initialized to 1 when created. This also improves performance because
> > dirty logging can be enabled gradually in small chunks on the first call
> > to KVM_CLEAR_DIRTY_LOG. KVM_DIRTY_LOG_INITIALLY_SET depends on
> > KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (it is also only available on
> > -x86 for now).
> > +x86 and arm64 for now).
> >
> > KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 was previously available under the name
> > KVM_CAP_MANUAL_DIRTY_LOG_PROTECT, but the implementation had bugs that make
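
(Aside, for anyone wanting to exercise this from userspace: as far as I
can tell nothing changes unless both bits are enabled via KVM_ENABLE_CAP
on the VM fd, along the lines of the untested sketch below. vm_fd and
enable_initially_set() are made up for illustration:

	/* needs <linux/kvm.h> and <sys/ioctl.h> */
	static int enable_initially_set(int vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap	 = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
			.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
				   KVM_DIRTY_LOG_INITIALLY_SET,
		};

		/* returns 0 on success, -1 with errno set otherwise */
		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}

QEMU presumably does the equivalent once it detects the capability.)
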
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 32c8a675e5a4..a723f84fab83 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -46,6 +46,9 @@
> > #define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
> > #define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)
> >
> > +#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
> > + KVM_DIRTY_LOG_INITIALLY_SET)
> > +
> > DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
> >
> > extern unsigned int kvm_sve_max_vl;
> > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > index e3b9ee268823..1077f653a611 100644
> > --- a/virt/kvm/arm/mmu.c
> > +++ b/virt/kvm/arm/mmu.c
> > @@ -2265,8 +2265,16 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> > * allocated dirty_bitmap[], dirty pages will be tracked while the
> > * memory slot is write protected.
> > */
> > - if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
> > - kvm_mmu_wp_memory_region(kvm, mem->slot);
> > + if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
> > + /*
> > + * If we're with initial-all-set, we don't need to write
> > + * protect any pages because they're all reported as dirty.
> > + * Huge pages and normal pages will be write-protected gradually.
> > + */
> > + if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
> > + kvm_mmu_wp_memory_region(kvm, mem->slot);
> > + }
> > + }
> > }
> >
> > int kvm_arch_prepare_memory_region(struct kvm *kvm,
> >
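
For context, kvm_dirty_log_manual_protect_and_init_set() is a generic
helper that came in with the x86 series; unless I'm misremembering, it
reads roughly like this in include/linux/kvm_host.h:

	static inline bool kvm_dirty_log_manual_protect_and_init_set(struct kvm *kvm)
	{
		return !!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET);
	}

so the write-protect pass in the hunk above is only skipped once userspace
has explicitly opted in to KVM_DIRTY_LOG_INITIALLY_SET.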
>
> Marc, what is the status of this patch?
I just had a look at it. Is there any urgency for merging it?
Thanks,
M.
--
Jazz is not dead. It just smells funny...