Message-Id: <20220405070412.859138892@linuxfoundation.org>
Date: Tue, 5 Apr 2022 09:15:26 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Max Filippov <jcmvbkbc@...il.com>
Subject: [PATCH 5.17 0179/1126] xtensa: define update_mmu_tlb function

From: Max Filippov <jcmvbkbc@...il.com>

commit 1c4664faa38923330d478f046dc743a00c1e2dec upstream.

Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault()
codepaths") there was a call to update_mmu_cache in alloc_set_pte that
invalidated the TLB entry caching the invalid PTE that caused a page
fault. That commit removed the call, so the invalid TLB entry now
survives, causing repeated page faults on the CPU that took the initial
fault until that TLB entry is eventually evicted. This issue is spotted
by the xtensa TLB sanity checker.

Fix this issue by defining an update_mmu_tlb function that flushes the
TLB entry for the faulting address.
Cc: stable@...r.kernel.org # 5.12+
Signed-off-by: Max Filippov <jcmvbkbc@...il.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 arch/xtensa/include/asm/pgtable.h |    4 ++++
 arch/xtensa/mm/tlb.c              |    6 ++++++
 2 files changed, 10 insertions(+)

--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -411,6 +411,10 @@ extern void update_mmu_cache(struct vm_
 
 typedef pte_t *pte_addr_t;
 
+void update_mmu_tlb(struct vm_area_struct *vma,
+		    unsigned long address, pte_t *ptep);
+#define __HAVE_ARCH_UPDATE_MMU_TLB
+
 #endif /* !defined (__ASSEMBLY__) */
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG

--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -162,6 +162,12 @@ void local_flush_tlb_kernel_range(unsign
 	}
 }
 
+void update_mmu_tlb(struct vm_area_struct *vma,
+		    unsigned long address, pte_t *ptep)
+{
+	local_flush_tlb_page(vma, address);
+}
+
 #ifdef CONFIG_DEBUG_TLB_SANITY
 
 static unsigned get_pte_for_vaddr(unsigned vaddr)
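
The caller side lives in the generic fault paths in mm/memory.c: when
the handler finds the PTE already populated (e.g. another CPU won the
race and installed the mapping first), it calls update_mmu_tlb() so a
stale TLB entry on the local CPU gets dropped. A rough sketch of that
call pattern, not the exact upstream code:

  /* mm/memory.c (sketch): the PTE was set by a concurrent fault, so
   * make sure the local TLB does not keep serving a stale copy.
   */
  if (!pte_none(*vmf->pte)) {
  	update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
  	goto release;
  }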