Message-Id: <20220718120212.3180-9-namit@vmware.com>
Date: Mon, 18 Jul 2022 05:02:06 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Nadav Amit <namit@...are.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Peter Xu <peterx@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Nick Piggin <npiggin@...il.com>
Subject: [RFC PATCH 08/14] x86/mm: introduce flush_tlb_fix_spurious_fault
From: Nadav Amit <namit@...are.com>
The next patches introduce relaxed TLB flushes for x86, under which a
spurious page fault may indicate that the local TLB holds stale
entries and therefore requires a full TLB flush. When a spurious page
fault occurs on x86, check whether the local TLB generation is out of
sync with the mm's TLB generation, and perform a full TLB flush if it
is.
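
For context, the generic fault path is the expected caller of this
hook. A condensed sketch of the relevant logic in mm/memory.c's
handle_pte_fault() (simplified here for illustration; not part of
this patch):

	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				  vmf->flags & FAULT_FLAG_WRITE)) {
		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
	} else {
		/*
		 * The PTE did not change, so the fault did no useful
		 * work: it was likely caused by a stale TLB entry.
		 * Ask the arch code to fix it up.
		 */
		if (vmf->flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vmf->vma, vmf->address);
	}

With the x86 definition below, the flush is only performed when the
local TLB generation lags behind the mm's generation, so spurious
faults on CPUs that are already up to date stay cheap.
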
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Peter Xu <peterx@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will@...nel.org>
Cc: Yu Zhao <yuzhao@...gle.com>
Cc: Nick Piggin <npiggin@...il.com>
Signed-off-by: Nadav Amit <namit@...are.com>
---
 arch/x86/include/asm/pgtable.h |  4 +++-
 arch/x86/mm/tlb.c              | 17 +++++++++++++++++
 2 files changed, 20 insertions(+), 1 deletion(-)
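
A note for reviewers: the new helper relies on x86's existing TLB
generation bookkeeping. A minimal sketch of that mechanism (existing
upstream code, paraphrased for illustration; nothing here is
introduced by this patch):

	/*
	 * Each mm has a monotonic flush generation, bumped whenever a
	 * TLB flush of the mm is requested (see inc_mm_tlb_gen()).
	 * Each CPU records, per ASID, the generation it last caught
	 * up with.
	 */
	u64 mm_tlb_gen    = atomic64_read(&mm->context.tlb_gen);
	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[asid].tlb_gen);

	/*
	 * local_tlb_gen < mm_tlb_gen means this CPU has not yet
	 * performed some requested flush and may hold stale entries;
	 * this is the case the new helper resolves with a full flush.
	 */
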
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 44e2d6f1dbaa..1fbdaff1bb7a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1079,7 +1079,9 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }

-#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
+extern void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
+					 unsigned long address);
+#define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault

 #define mk_pmd(page, pgprot)	pfn_pmd(page_to_pfn(page), (pgprot))

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d400b6d9d246..ff3bcc55435e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -955,6 +955,23 @@ static void put_flush_tlb_info(void)
 #endif
 }

+void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
+				  unsigned long address)
+{
+	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+	u64 mm_tlb_gen = atomic64_read(&vma->vm_mm->context.tlb_gen);
+	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
+	struct flush_tlb_info *info;
+
+	if (local_tlb_gen == mm_tlb_gen)
+		return;
+
+	preempt_disable();
+	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false, 0);
+	flush_tlb_func(info);
+	preempt_enable();
+}
+
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 			unsigned long end, unsigned int stride_shift,
 			bool freed_tables)
--
2.25.1