Message-ID: <20210126124444.27136-3-zhukeqian1@huawei.com>
Date: Tue, 26 Jan 2021 20:44:39 +0800
From: Keqian Zhu <zhukeqian1@...wei.com>
To: <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>, <kvm@...r.kernel.org>,
<kvmarm@...ts.cs.columbia.edu>, Marc Zyngier <maz@...nel.org>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>
CC: Alex Williamson <alex.williamson@...hat.com>,
Kirti Wankhede <kwankhede@...dia.com>,
Cornelia Huck <cohuck@...hat.com>,
Mark Rutland <mark.rutland@....com>,
James Morse <james.morse@....com>,
Robin Murphy <robin.murphy@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
<wanghaibin.wang@...wei.com>, <jiangkunkun@...wei.com>,
<xiexiangyou@...wei.com>, <zhengchuan@...wei.com>,
<yubihong@...wei.com>
Subject: [RFC PATCH 2/7] kvm: arm64: Use atomic operation when updating PTE
We are about to add HW_DBM support for the stage2 dirty log, so software
updates of a PTE may race with the MMU trying to set the access flag or
dirty state.
Use atomic operations to avoid reverting these bits set by the MMU.
Signed-off-by: Keqian Zhu <zhukeqian1@...wei.com>
---
arch/arm64/kvm/hyp/pgtable.c | 41 ++++++++++++++++++++++++------------
1 file changed, 27 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdf8e55ed308..4915ba35f93b 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -153,10 +153,34 @@ static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte)
return __va(kvm_pte_to_phys(pte));
}
+/*
+ * We may race with the MMU trying to set the access flag or dirty state,
+ * use atomic operations to avoid reverting these bits.
+ *
+ * Return original PTE.
+ */
+static kvm_pte_t kvm_update_pte(kvm_pte_t *ptep, kvm_pte_t bit_set,
+ kvm_pte_t bit_clr)
+{
+ kvm_pte_t old_pte, pte = *ptep;
+
+ do {
+ old_pte = pte;
+ pte &= ~bit_clr;
+ pte |= bit_set;
+
+ if (old_pte == pte)
+ break;
+
+ pte = cmpxchg_relaxed(ptep, old_pte, pte);
+ } while (pte != old_pte);
+
+ return old_pte;
+}
+
static void kvm_set_invalid_pte(kvm_pte_t *ptep)
{
- kvm_pte_t pte = *ptep;
- WRITE_ONCE(*ptep, pte & ~KVM_PTE_VALID);
+ kvm_update_pte(ptep, 0, KVM_PTE_VALID);
}
static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
@@ -723,18 +747,7 @@ static int stage2_attr_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
return 0;
data->level = level;
- data->pte = pte;
- pte &= ~data->attr_clr;
- pte |= data->attr_set;
-
- /*
- * We may race with the CPU trying to set the access flag here,
- * but worst-case the access flag update gets lost and will be
- * set on the next access instead.
- */
- if (data->pte != pte)
- WRITE_ONCE(*ptep, pte);
-
+ data->pte = kvm_update_pte(ptep, data->attr_set, data->attr_clr);
return 0;
}
--
2.19.1