Message-Id: <20191003154607.775075179@linuxfoundation.org>
Date: Thu, 3 Oct 2019 17:54:07 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Mark Rutland <mark.rutland@arm.com>,
	Will Deacon <will@kernel.org>
Subject: [PATCH 5.3 282/344] arm64: tlb: Ensure we execute an ISB following walk cache invalidation

From: Will Deacon <will@kernel.org>

commit 51696d346c49c6cf4f29e9b20d6e15832a2e3408 upstream.

Commit 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
added a new TLB invalidation helper, used when freeing intermediate
levels of page table for kernel mappings, but it is missing the ISB
instruction required after completion of the TLBI instruction.

Add the missing barrier.
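
For context, the DSB(ISH) that precedes the new barrier only waits for
the TLBI itself to complete; an ISB is still needed so that instructions
already fetched, which may have been speculated against the old
walk-cache state, are discarded and cannot walk the page table being
freed. The resulting sequence, annotated (a sketch using the arm64
dsb()/__tlbi()/isb() macros from the file patched below; the comments
are explanatory additions, not part of the kernel source):

	dsb(ishst);		/* make the page-table update visible to the walker */
	__tlbi(vaae1is, addr);	/* invalidate VA for all ASIDs, inner shareable */
	dsb(ish);		/* wait for the invalidation to complete */
	isb();			/* synchronize the instruction stream */
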
Cc: <stable@vger.kernel.org>
Fixes: 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/arm64/include/asm/tlbflush.h | 1 +
1 file changed, 1 insertion(+)
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -251,6 +251,7 @@ static inline void __flush_tlb_kernel_pg
 	dsb(ishst);
 	__tlbi(vaae1is, addr);
 	dsb(ish);
+	isb();
 }
 #endif
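
For context on why the walk cache matters here, this helper is invoked
when an intermediate level of kernel page table is torn down, e.g. from
pmd_free_pte_page() in arch/arm64/mm/mmu.c. A simplified sketch of that
caller (reconstructed from the v5.3 source, with the pmd_table() sanity
check elided):

	static int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
	{
		pte_t *table = pte_offset_kernel(pmdp, addr);

		pmd_clear(pmdp);
		/* Without the ISB added above, a speculative walk could
		 * still consume the stale walk-cache entry and reach the
		 * table page after it has been freed. */
		__flush_tlb_kernel_pgtable(addr);
		pte_free_kernel(NULL, table);
		return 1;
	}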