Message-Id: <20220718120212.3180-7-namit@vmware.com>
Date: Mon, 18 Jul 2022 05:02:04 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Nadav Amit <namit@...are.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Peter Xu <peterx@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Nick Piggin <npiggin@...il.com>
Subject: [RFC PATCH 06/14] mm/rmap: avoid flushing on page_vma_mkclean_one() when possible
From: Nadav Amit <namit@...are.com>
x86 can avoid a TLB flush when write-protecting clean, writable
entries. page_vma_mkclean_one() does not take advantage of this
behavior. Adapt it to do so.
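To illustrate the rationale, here is a minimal stand-alone model of the
flush decision (illustrative names only, not the kernel's
pte_needs_flush() implementation): on x86, the CPU sets the dirty bit
through an atomic page-table walk that re-checks the PTE in memory, so
a write through a stale, clean TLB entry faults once the in-memory PTE
has been write-protected.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions, loosely modeled on x86 _PAGE_RW/_PAGE_DIRTY. */
#define PTE_WRITE	(1ULL << 1)
#define PTE_DIRTY	(1ULL << 6)

/*
 * Model: does write-protecting and cleaning @oldpte require an
 * immediate TLB flush? If the old PTE was dirty, the TLB may hold a
 * dirty, writable translation that permits further writes without
 * consulting the page tables, so a flush is required. If the old PTE
 * was clean, the first write must atomically set the dirty bit, which
 * re-validates the PTE in memory and faults on the cleared write bit,
 * so the flush can be deferred.
 */
static bool mkclean_needs_flush(uint64_t oldpte)
{
	return (oldpte & PTE_DIRTY) != 0;
}

A clean-but-writable entry returns false in this model, which is the
case page_vma_mkclean_one() can now leave unflushed.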
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Peter Xu <peterx@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will@...nel.org>
Cc: Yu Zhao <yuzhao@...gle.com>
Cc: Nick Piggin <npiggin@...il.com>
Signed-off-by: Nadav Amit <namit@...are.com>
---
mm/rmap.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 83172ee0ea35..23997c387858 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -961,17 +961,25 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 
 		address = pvmw->address;
 		if (pvmw->pte) {
-			pte_t entry;
+			pte_t entry, oldpte;
 			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
 
 			flush_cache_page(vma, address, pte_pfn(*pte));
-			entry = ptep_clear_flush(vma, address, pte);
-			entry = pte_wrprotect(entry);
+			oldpte = ptep_modify_prot_start(pvmw->vma, address,
+							pte);
+
+			entry = pte_wrprotect(oldpte);
 			entry = pte_mkclean(entry);
-			set_pte_at(vma->vm_mm, address, pte, entry);
+
+			if (pte_needs_flush(oldpte, entry) ||
+			    mm_tlb_flush_pending(vma->vm_mm))
+				flush_tlb_page(vma, address);
+
+			ptep_modify_prot_commit(vma, address, pte, oldpte,
+						entry);
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
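For context on the ptep_modify_prot_start()/ptep_modify_prot_commit()
pair used above: unlike ptep_clear_flush(), the start/commit pair
clears the PTE without flushing and later installs the new value, which
is what lets the caller decide whether a flush is needed at all. The
generic fallbacks behave roughly as below (a paraphrase of the generic
versions in include/linux/pgtable.h; paravirtualized architectures
override them):

static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
					   unsigned long addr, pte_t *ptep)
{
	/*
	 * Clear the PTE so a concurrent hardware walker cannot set the
	 * accessed/dirty bits while the new value is being computed.
	 * Note: no TLB flush is issued here.
	 */
	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
}

static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
					   unsigned long addr, pte_t *ptep,
					   pte_t old_pte, pte_t pte)
{
	/* Install the final PTE value computed from old_pte. */
	set_pte_at(vma->vm_mm, addr, ptep, pte);
}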
--
2.25.1