Message-ID: <20200616093553.27512-11-zhukeqian1@huawei.com>
Date: Tue, 16 Jun 2020 17:35:51 +0800
From: Keqian Zhu <zhukeqian1@...wei.com>
To: <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.cs.columbia.edu>, <kvm@...r.kernel.org>
CC: Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
James Morse <james.morse@....com>,
Will Deacon <will@...nel.org>,
"Suzuki K Poulose" <suzuki.poulose@....com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Mark Brown <broonie@...nel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexios Zavras <alexios.zavras@...el.com>,
<liangpeng10@...wei.com>, <zhengxiang9@...wei.com>,
<wanghaibin.wang@...wei.com>, Keqian Zhu <zhukeqian1@...wei.com>
Subject: [PATCH 10/12] KVM: arm64: Save stage2 PTE dirty status if it is covered

There are two types of operations that will change a stage2 PTE and
may cover a dirty status set by hardware:

1. Stage2 PTE unmapping: page table merging (the revert of huge page
   table dissolving), kvm_unmap_hva_range() and so on.
2. Stage2 PTE changing: user_mem_abort(),
   kvm_mmu_notifier_change_pte() and so on.

All of the operations above eventually invoke kvm_set_pte(), so that
is where we should save the dirty status into the memslot dirty
bitmap.
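
For reference, a minimal sketch of the kvm_set_pte() behaviour this
patch relies on. Only mmu.c is touched here, so the conversion of
kvm_set_pte() to return the old dirty state presumably happens in an
earlier patch of this series; the body below is illustrative only,
not the series' actual code:

	/*
	 * Illustrative sketch: install the new PTE and report whether
	 * the old one carried a hardware-set dirty state that would
	 * otherwise be lost. With DBM, hardware dirties a page by
	 * making the stage2 PTE writable, so a valid, writable old
	 * PTE means the caller must save its dirty state.
	 */
	static bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
	{
		pte_t old_pte = READ_ONCE(*ptep);

		WRITE_ONCE(*ptep, new_pte);
		dsb(ishst);

		return pte_valid(old_pte) && !kvm_s2pte_readonly(&old_pte);
	}
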
Question: Should we acquire kvm_slots_lock when invoking
mark_page_dirty()? It seems that user_mem_abort() does not acquire
this lock when invoking it.
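
(For context: mark_page_dirty() looks up the memslots, which are
protected by kvm->srcu, and the vcpu run loop in
kvm_arch_vcpu_ioctl_run() already holds that SRCU read lock, which
may be why user_mem_abort() can call it directly. A sketch of the
pattern used below for callers outside the run loop, where ipa is
the intermediate physical address being unmapped:)

	int idx;

	/* Enter the SRCU read-side section before touching memslots. */
	idx = srcu_read_lock(&kvm->srcu);
	mark_page_dirty(kvm, ipa >> PAGE_SHIFT);
	srcu_read_unlock(&kvm->srcu, idx);
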
Signed-off-by: Keqian Zhu <zhukeqian1@...wei.com>
---
arch/arm64/kvm/mmu.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 898e272a2c07..a230fbcf3889 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -294,15 +294,23 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
{
phys_addr_t start_addr = addr;
pte_t *pte, *start_pte;
+ bool dirty_covered;
+ int idx;

start_pte = pte = pte_offset_kernel(pmd, addr);
do {
if (!pte_none(*pte)) {
pte_t old_pte = *pte;
- kvm_set_pte(pte, __pte(0));
+ dirty_covered = kvm_set_pte(pte, __pte(0));
kvm_tlb_flush_vmid_ipa(kvm, addr);
+ if (dirty_covered) {
+ idx = srcu_read_lock(&kvm->srcu);
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ srcu_read_unlock(&kvm->srcu, idx);
+ }
+
/* No need to invalidate the cache for device mappings */
if (!kvm_is_device_pfn(pte_pfn(old_pte)))
kvm_flush_dcache_pte(old_pte);
@@ -1388,6 +1396,8 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
pte_t *pte, old_pte;
bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
bool logging_active = flags & KVM_S2_FLAG_LOGGING_ACTIVE;
+ bool dirty_covered;
+ int idx;

VM_BUG_ON(logging_active && !cache);
@@ -1453,8 +1463,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
if (pte_val(old_pte) == pte_val(*new_pte))
return 0;
- kvm_set_pte(pte, __pte(0));
+ dirty_covered = kvm_set_pte(pte, __pte(0));
kvm_tlb_flush_vmid_ipa(kvm, addr);
+
+ if (dirty_covered) {
+ idx = srcu_read_lock(&kvm->srcu);
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ srcu_read_unlock(&kvm->srcu, idx);
+ }
} else {
get_page(virt_to_page(pte));
}
--
2.19.1