Message-ID: <20250829153510.2401161-1-ryan.roberts@arm.com>
Date: Fri, 29 Aug 2025 16:35:06 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
James Morse <james.morse@....com>
Cc: Ryan Roberts <ryan.roberts@....com>,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH v1 0/2] Don't broadcast TLBI if mm was only active on local CPU
Hi All,
This is an RFC for my implementation of an idea from James Morse to avoid
broadcasting TLBIs to remote CPUs when it can be proven that no remote CPU could
ever have observed the pgtable entry backing the TLB entry that is being
invalidated. It turns out that x86 does something similar in principle.
The primary feedback I'm looking for is: is this actually correct and safe?
James and I both believe it is, but further validation would be useful.
Beyond that, the next question is: does it actually improve performance?
stress-ng's --tlb-shootdown stressor suggests it does; as concurrency increases,
we do a much better job of sustaining the overall rate of "tlb shootdowns per
second" after the change:
+------------+--------------------------+--------------------------+--------------------------+
| | Baseline (v6.15) | tlbi local | Improvement |
+------------+-------------+------------+-------------+------------+-------------+------------+
| nr_threads | ops/sec     | ops/sec    | ops/sec     | ops/sec    | % change    | % change   |
| | (real time) | (cpu time) | (real time) | (cpu time) | (real time) | (cpu time) |
+------------+-------------+------------+-------------+------------+-------------+------------+
| 1 | 9109 | 2573 | 8903 | 3653 | -2% | 42% |
| 4 | 8115 | 1299 | 9892 | 1059 | 22% | -18% |
| 8 | 5119 | 477 | 11854 | 1265 | 132% | 165% |
| 16 | 4796 | 286 | 14176 | 821 | 196% | 187% |
| 32 | 1593 | 38 | 15328 | 474 | 862% | 1147% |
| 64 | 1486 | 19 | 8096 | 131 | 445% | 589% |
| 128 | 1315 | 16 | 8257 | 145 | 528% | 806% |
+------------+-------------+------------+-------------+------------+-------------+------------+
But looking at real-world benchmarks, I haven't yet found anything where it
makes a huge difference: when compiling the kernel, it reduces kernel time by
~2.2%, but overall wall time is unchanged. I'd be interested in any
suggestions for workloads where this might prove valuable.
All mm selftests have been run with no regressions observed. Applies on
v6.17-rc3.
Thanks,
Ryan
Ryan Roberts (2):
arm64: tlbflush: Move invocation of __flush_tlb_range_op() to a macro
arm64: tlbflush: Don't broadcast if mm was only active on local cpu
arch/arm64/include/asm/mmu.h | 12 +++
arch/arm64/include/asm/mmu_context.h | 2 +
arch/arm64/include/asm/tlbflush.h | 116 ++++++++++++++++++++++++---
arch/arm64/mm/context.c | 30 ++++++-
4 files changed, 145 insertions(+), 15 deletions(-)
--
2.43.0