Message-Id: <1696011549-28036-3-git-send-email-mikelley@microsoft.com>
Date: Fri, 29 Sep 2023 11:19:06 -0700
From: Michael Kelley <mikelley@...rosoft.com>
To: kys@...rosoft.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
decui@...rosoft.com, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
luto@...nel.org, peterz@...radead.org, thomas.lendacky@....com,
sathyanarayanan.kuppuswamy@...ux.intel.com,
kirill.shutemov@...ux.intel.com, seanjc@...gle.com,
rick.p.edgecombe@...el.com, linux-kernel@...r.kernel.org,
linux-hyperv@...r.kernel.org, x86@...nel.org
Cc: mikelley@...rosoft.com
Subject: [PATCH 2/5] x86/mm: Don't do a TLB flush if changing a PTE that isn't marked present
The core function __change_page_attr() currently sets up a TLB flush if
a PTE is changed. But if the old value of the PTE doesn't include the
PRESENT flag, the PTE won't be in the TLB, so a flush isn't needed.
Avoid the unnecessary TLB flush by doing the flush only when the old
PTE value includes the PRESENT flag. This change improves the performance
of functions like set_memory_p() by skipping the flush when the memory
range was previously entirely not present.
Signed-off-by: Michael Kelley <mikelley@...rosoft.com>
---
arch/x86/mm/pat/set_memory.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8e19796..d7ef8d3 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1636,7 +1636,10 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
*/
if (pte_val(old_pte) != pte_val(new_pte)) {
set_pte_atomic(kpte, new_pte);
- cpa->flags |= CPA_FLUSHTLB;
+
+ /* If old_pte isn't present, it's not in the TLB */
+ if (pte_present(old_pte))
+ cpa->flags |= CPA_FLUSHTLB;
}
cpa->numpages = 1;
return 0;
--
1.8.3.1