Message-ID: <20201130121847.91808-3-wangyanan55@huawei.com>
Date: Mon, 30 Nov 2020 20:18:46 +0800
From: Yanan Wang <wangyanan55@...wei.com>
To: <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
Marc Zyngier <maz@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
James Morse <james.morse@....com>,
"Julien Thierry" <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Gavin Shan <gshan@...hat.com>,
Quentin Perret <qperret@...gle.com>
CC: <wanghaibin.wang@...wei.com>, <yezengruan@...wei.com>,
<zhukeqian1@...wei.com>, <yuzenghui@...wei.com>,
<jiangkunkun@...wei.com>, <wangjingyi11@...wei.com>,
<lushenming@...wei.com>, Yanan Wang <wangyanan55@...wei.com>
Subject: [RFC PATCH 2/3] KVM: arm64: Fix handling of merging tables into a block entry

In the dirty logging case (logging_active == True), we need to collapse a
block entry into a table if necessary. After dirty logging is canceled, when
merging tables back into a block entry, we should not only free the non-huge
page-table pages but also unmap the non-huge mapping for the block. Without
the unmap, inconsistent TLB entries for the pages in the block will be
created.

We could also use the unmap_stage2_range API to unmap the non-huge mapping,
but this could potentially free the upper-level page-table page, which will
still be useful later.
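
As a quick illustration (a condensed sketch only, not meant to compile on
its own; the helper names are the ones already used in
arch/arm64/kvm/hyp/pgtable.c), the break-before-make sequence this patch
applies to each valid leaf found below the anchor looks like:

	if (kvm_pte_valid(pte)) {
		/* Invalidate the PTE first so the walker can't follow it. */
		kvm_set_invalid_pte(ptep);

		/* Then invalidate any stale TLB entries for this IPA. */
		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);

		/* Drop the refcount this mapping held on its table page. */
		put_page(virt_to_page(ptep));

		/* Finally, clean the dcache for cacheable mappings. */
		if (stage2_pte_cacheable(pte))
			stage2_flush_dcache(kvm_pte_follow(pte),
					    kvm_granule_size(level));
	}

Unmapping leaf by leaf here keeps the upper-level table page alive, so the
anchor logic (see stage2_map_walk_table_post()) can still use it when
installing the block entry.
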
Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
---
arch/arm64/kvm/hyp/pgtable.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 696b6aa83faf..fec8dc9f2baa 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 	return 0;
 }
 
+static void stage2_flush_dcache(void *addr, u64 size);
+static bool stage2_pte_cacheable(kvm_pte_t pte);
+
 static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 				struct stage2_map_data *data)
 {
@@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct page *page = virt_to_page(ptep);
 
 	if (data->anchor) {
-		if (kvm_pte_valid(pte))
+		if (kvm_pte_valid(pte)) {
+			kvm_set_invalid_pte(ptep);
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
+				     addr, level);
 			put_page(page);
 
+			if (stage2_pte_cacheable(pte))
+				stage2_flush_dcache(kvm_pte_follow(pte),
+						    kvm_granule_size(level));
+		}
+
 		return 0;
 	}
 
@@ -574,7 +585,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
  * The behaviour of the LEAF callback then depends on whether or not the
  * anchor has been set. If not, then we're not using a block mapping higher
  * up the table and we perform the mapping at the existing leaves instead.
- * If, on the other hand, the anchor _is_ set, then we drop references to
+ * If, on the other hand, the anchor _is_ set, then we unmap the mapping of
  * all valid leaves so that the pages beneath the anchor can be freed.
  *
  * Finally, the TABLE_POST callback does nothing if the anchor has not
--
2.19.1