Message-ID: <20200616093553.27512-1-zhukeqian1@huawei.com>
Date: Tue, 16 Jun 2020 17:35:41 +0800
From: Keqian Zhu <zhukeqian1@...wei.com>
To: <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.cs.columbia.edu>, <kvm@...r.kernel.org>
CC: Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
James Morse <james.morse@....com>,
Will Deacon <will@...nel.org>,
"Suzuki K Poulose" <suzuki.poulose@....com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Mark Brown <broonie@...nel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexios Zavras <alexios.zavras@...el.com>,
<liangpeng10@...wei.com>, <zhengxiang9@...wei.com>,
<wanghaibin.wang@...wei.com>, Keqian Zhu <zhukeqian1@...wei.com>
Subject: [PATCH 00/12] KVM: arm64: Support stage2 hardware DBM
This patch series adds support for stage2 hardware DBM, which is used
only for dirty logging for now.

It works well under several migration test cases, including VMs backed
by 4K pages or 2M THP. I checked the SHA256 digest of all guest memory;
it stays the same on the source and destination VMs, which means no
dirty page is missed under hardware DBM.

Some key points:
1. Hardware updates of dirty status are supported only for PTEs; PMDs
and PUDs are not involved for now.
2. About *performance*: in the RFC I mentioned that, for every 64GB of
memory, KVM takes about 40ms to scan all PTEs to collect the dirty log.
Initially I planned to solve this problem with parallel CPUs, but I hit
two problems.
The first is the memory bandwidth bottleneck: a single thread already
occupies about 500GB/s of bandwidth, so we can run at most about 4
threads in parallel and the ideal speedup ratio is low.
The second is the heavy impact on other CPUs. To scan the PTs quickly I
used smp_call_function_many, which is IPI based, to dispatch the
workload to other CPUs. Although the work completes in time, interrupts
are disabled while the PTs are being scanned, which hurts those CPUs
badly.
Now hardware dirty logging can be enabled and disabled dynamically.
Userspace can enable it before VM migration and disable it when few
dirty pages remain, so VM downtime is not affected (a userspace sketch
follows these key points).
3. About correctness: the DBM bit is only added when a PTE is already
writable, so we still have read-only PTEs and the mechanisms that rely
on read-only PTs are not broken.
4. About races on PT modification: there are two kinds of modification.
The first is adding or clearing a specific bit, such as AF or R/W. All
these operations have been converted to be atomic, so they cannot
overwrite a dirty status set by hardware.
The second is replacement, such as unmapping or changing PTEs. All
these operations eventually invoke kvm_set_pte, which has been
converted to be atomic, and we save the dirty status to the underlying
bitmap if it would otherwise be covered (a kernel-side sketch
illustrating points 3 and 4 follows this list).
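
To make points 3 and 4 more concrete, here is a minimal kernel-style
sketch, not the code of this series: the S2_PTE_* macro and function
names are invented for illustration, and the bit positions follow the
Armv8.1 description of hardware dirty state management (DBM is
descriptor bit 51, S2AP[1] is bit 7).

#include <linux/types.h>
#include <linux/compiler.h>
#include <linux/atomic.h>

#define S2_PTE_DBM	(1UL << 51)	/* Dirty Bit Modifier */
#define S2_PTE_S2AP_W	(1UL << 7)	/* stage-2 write permission */

/*
 * DBM is only set on PTEs that were writable to begin with (point 3),
 * so a PTE is "hardware dirty" exactly when DBM is set and the CPU has
 * (re)marked it writable.
 */
static inline bool s2_pte_hw_dirty(u64 pte)
{
	return (pte & S2_PTE_DBM) && (pte & S2_PTE_S2AP_W);
}

/*
 * Write protect the PTE atomically (point 4): cmpxchg64 returns the
 * value that was actually replaced, so a dirty status set by hardware
 * between the read and the update is never lost; the caller can record
 * it in the dirty bitmap before the page becomes read-only again.
 */
static inline bool s2_pte_test_and_wrprotect(u64 *ptep)
{
	u64 old, new;

	do {
		old = READ_ONCE(*ptep);
		new = old & ~S2_PTE_S2AP_W;
	} while (cmpxchg64(ptep, old, new) != old);

	return s2_pte_hw_dirty(old);
}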
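
For point 2, a rough userspace-side sketch of how the dynamic switch
might be driven. KVM_ENABLE_CAP is the existing generic mechanism;
whether KVM_CAP_ARM_HW_DIRTY_LOG is really toggled this way and the
meaning of args[0] are assumptions made for illustration only, not the
ABI defined by this series.

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Assumed on/off toggle via KVM_ENABLE_CAP; not the series' actual ABI. */
static int set_hw_dirty_log(int vm_fd, bool enable)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_HW_DIRTY_LOG,  /* added by this series */
		.args[0] = enable,                /* assumed: 1 = on, 0 = off */
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/*
 * Typical flow during live migration:
 *	set_hw_dirty_log(vm_fd, true);    before iterative pre-copy starts
 *	...sync the dirty log each iteration...
 *	set_hw_dirty_log(vm_fd, false);   when few dirty pages remain
 */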
Keqian Zhu (12):
KVM: arm64: Add some basic functions to support hw DBM
KVM: arm64: Modify stage2 young mechanism to support hw DBM
KVM: arm64: Report hardware dirty status of stage2 PTE if covered
KVM: arm64: Support clear DBM bit for PTEs
KVM: arm64: Add KVM_CAP_ARM_HW_DIRTY_LOG capability
KVM: arm64: Set DBM bit of PTEs during write protecting
KVM: arm64: Scan PTEs to sync dirty log
KVM: Omit dirty log sync in log clear if initially all set
KVM: arm64: Stepwise write protect page table by mask bit
KVM: arm64: Save stage2 PTE dirty status if it is covered
KVM: arm64: Support disable hw dirty log after enable
KVM: arm64: Enable stage2 hardware DBM
arch/arm64/include/asm/kvm_host.h | 11 +
arch/arm64/include/asm/kvm_mmu.h | 56 +++-
arch/arm64/include/asm/sysreg.h | 2 +
arch/arm64/kvm/arm.c | 22 +-
arch/arm64/kvm/mmu.c | 411 ++++++++++++++++++++++++++++--
arch/arm64/kvm/reset.c | 14 +-
include/uapi/linux/kvm.h | 1 +
tools/include/uapi/linux/kvm.h | 1 +
virt/kvm/kvm_main.c | 7 +-
9 files changed, 499 insertions(+), 26 deletions(-)
--
2.19.1