Message-Id: <20220718120212.3180-14-namit@vmware.com>
Date: Mon, 18 Jul 2022 05:02:11 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Nadav Amit <namit@...are.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Peter Xu <peterx@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Nick Piggin <npiggin@...il.com>
Subject: [RFC PATCH 13/14] mm/mprotect: do not check flush type if a strict flush is needed
From: Nadav Amit <namit@...are.com>
Once it has been determined that a strict TLB flush is needed, it is
likely that other PTEs will also need a strict TLB flush, and there is
little benefit in not extending the range that is flushed.

Skip the check of which TLB flush type is needed if a strict flush has
already been determined to be necessary.
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Peter Xu <peterx@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will@...nel.org>
Cc: Yu Zhao <yuzhao@...gle.com>
Cc: Nick Piggin <npiggin@...il.com>
Signed-off-by: Nadav Amit <namit@...are.com>
---
 mm/huge_memory.c | 4 +++-
 mm/mprotect.c    | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 09e6608a6431..b32b7da0f6f7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1816,7 +1816,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	ret = HPAGE_PMD_NR;
 	set_pmd_at(mm, addr, pmd, entry);
 
-	flush_type = huge_pmd_flush_type(oldpmd, entry);
+	flush_type = PTE_FLUSH_STRICT;
+	if (!tlb->strict)
+		flush_type = huge_pmd_flush_type(oldpmd, entry);
 	if (flush_type != PTE_FLUSH_NONE)
 		tlb_flush_pmd_range(tlb, addr, HPAGE_PMD_SIZE,
 				    flush_type == PTE_FLUSH_STRICT);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index ead20dc66d34..cf775f6c8c08 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -202,7 +202,9 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
 
-			flush_type = pte_flush_type(oldpte, ptent);
+			flush_type = PTE_FLUSH_STRICT;
+			if (!tlb->strict)
+				flush_type = pte_flush_type(oldpte, ptent);
 			if (flush_type != PTE_FLUSH_NONE)
 				tlb_flush_pte_range(tlb, addr, PAGE_SIZE,
 						    flush_type == PTE_FLUSH_STRICT);
--
2.25.1