Message-ID: <20250319143032.645686776@linuxfoundation.org>
Date: Wed, 19 Mar 2025 07:31:09 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: stable@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
patches@...ts.linux.dev,
Piotr Jaroszynski <pjaroszynski@...dia.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Alistair Popple <apopple@...dia.com>,
Raghavendra Rao Ananta <rananta@...gle.com>,
SeongJae Park <sj@...nel.org>,
Jason Gunthorpe <jgg@...dia.com>,
John Hubbard <jhubbard@...dia.com>,
Nicolin Chen <nicolinc@...dia.com>,
linux-arm-kernel@...ts.infradead.org,
iommu@...ts.linux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 6.13 199/241] Fix mmu notifiers for range-based invalidates
6.13-stable review patch. If anyone has any objections, please let me know.
------------------
From: Piotr Jaroszynski <pjaroszynski@...dia.com>
commit f7edb07ad7c66eab3dce57384f33b9799d579133 upstream.
Update the __flush_tlb_range_op macro not to modify its parameters, as
doing so has unexpected semantics. In practice, this fixes the call to
mmu_notifier_arch_invalidate_secondary_tlbs() in
__flush_tlb_range_nosync() to use the correct range instead of an empty
range with start=end. The empty range was (un)lucky in that it took
the invalidate-all path, which avoids correctness issues but can
certainly result in suboptimal performance.
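To illustrate the failure mode outside the kernel, here is a minimal
stand-alone C sketch (the toy macro, names and values below are
illustrative only, not the kernel code): a macro that advances its
arguments mutates the caller's variables, so a later use of 'start'
against 'end' sees an empty range.

#include <stdio.h>

/* Toy stand-in for the old macro behaviour: it consumes its	\
 * 'start'/'pages' arguments directly, so the caller's		\
 * variables are mutated as a side effect. */
#define flush_range_op(start, pages, stride)			\
do {								\
	while ((pages) > 0) {					\
		/* ... a real TLBI would be issued here ... */	\
		(start) += (stride);				\
		(pages) -= 1;					\
	}							\
} while (0)

int main(void)
{
	unsigned long start = 0x1000, end = 0x5000;
	unsigned long pages = 4, stride = 0x1000;

	flush_range_op(start, pages, stride);

	/* Mirrors the buggy notifier call: 'start' has been
	 * advanced all the way to 'end', so the notified range
	 * is empty. */
	printf("notify range [%#lx, %#lx) -> %s\n", start, end,
	       start == end ? "EMPTY (bug)" : "ok");
	return 0;
}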
This has been broken since commit 6bbd42e2df8f ("mmu_notifiers: call
invalidate_range() when invalidating TLBs") when the call to the
notifiers was added to __flush_tlb_range(). It predates the addition of
the __flush_tlb_range_op() macro from commit 360839027a6e ("arm64: tlb:
Refactor the core flush algorithm of __flush_tlb_range") that made the
bug hard to spot.
Fixes: 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs")
Signed-off-by: Piotr Jaroszynski <pjaroszynski@...dia.com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Will Deacon <will@...nel.org>
Cc: Robin Murphy <robin.murphy@....com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Raghavendra Rao Ananta <rananta@...gle.com>
Cc: SeongJae Park <sj@...nel.org>
Cc: Jason Gunthorpe <jgg@...dia.com>
Cc: John Hubbard <jhubbard@...dia.com>
Cc: Nicolin Chen <nicolinc@...dia.com>
Cc: linux-arm-kernel@...ts.infradead.org
Cc: iommu@...ts.linux.dev
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
Cc: stable@...r.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@....com>
Reviewed-by: Alistair Popple <apopple@...dia.com>
Link: https://lore.kernel.org/r/20250304085127.2238030-1-pjaroszynski@nvidia.com
Signed-off-by: Will Deacon <will@...nel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/arm64/include/asm/tlbflush.h | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -396,33 +396,35 @@ static inline void arch_tlbbatch_flush(s
#define __flush_tlb_range_op(op, start, pages, stride, \
asid, tlb_level, tlbi_user, lpa2) \
do { \
+ typeof(start) __flush_start = start; \
+ typeof(pages) __flush_pages = pages; \
int num = 0; \
int scale = 3; \
int shift = lpa2 ? 16 : PAGE_SHIFT; \
unsigned long addr; \
\
- while (pages > 0) { \
+ while (__flush_pages > 0) { \
if (!system_supports_tlb_range() || \
- pages == 1 || \
- (lpa2 && start != ALIGN(start, SZ_64K))) { \
- addr = __TLBI_VADDR(start, asid); \
+ __flush_pages == 1 || \
+ (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
+ addr = __TLBI_VADDR(__flush_start, asid); \
__tlbi_level(op, addr, tlb_level); \
if (tlbi_user) \
__tlbi_user_level(op, addr, tlb_level); \
- start += stride; \
- pages -= stride >> PAGE_SHIFT; \
+ __flush_start += stride; \
+ __flush_pages -= stride >> PAGE_SHIFT; \
continue; \
} \
\
- num = __TLBI_RANGE_NUM(pages, scale); \
+ num = __TLBI_RANGE_NUM(__flush_pages, scale); \
if (num >= 0) { \
- addr = __TLBI_VADDR_RANGE(start >> shift, asid, \
+ addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
scale, num, tlb_level); \
__tlbi(r##op, addr); \
if (tlbi_user) \
__tlbi_user(r##op, addr); \
- start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
- pages -= __TLBI_RANGE_PAGES(num, scale); \
+ __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+ __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
} \
scale--; \
} \