Message-ID: <20250428143352.53761-6-miko.lenczewski@arm.com>
Date: Mon, 28 Apr 2025 14:33:53 +0000
From: Mikołaj Lenczewski <miko.lenczewski@....com>
To: ryan.roberts@....com,
suzuki.poulose@....com,
yang@...amperecomputing.com,
corbet@....net,
catalin.marinas@....com,
will@...nel.org,
jean-philippe@...aro.org,
robin.murphy@....com,
joro@...tes.org,
akpm@...ux-foundation.org,
paulmck@...nel.org,
mark.rutland@....com,
joey.gouly@....com,
maz@...nel.org,
james.morse@....com,
broonie@...nel.org,
oliver.upton@...ux.dev,
baohua@...nel.org,
david@...hat.com,
ioworker0@...il.com,
jgg@...pe.ca,
nicolinc@...dia.com,
mshavit@...gle.com,
jsnitsel@...hat.com,
smostafa@...gle.com,
kevin.tian@...el.com,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
iommu@...ts.linux.dev
Cc: Mikołaj Lenczewski <miko.lenczewski@....com>
Subject: [PATCH v6 3/3] arm64/mm: Reorder tlbi in contpte_convert() under BBML2

When converting a region via contpte_convert() to use mTHP, we have two
different goals. We have to mark each entry as contiguous, and we would
like to smear the dirty and young (access) bits across all entries in
the contiguous block. Currently, we do this by first accumulating the
dirty and young bits in the block (using an atomic
__ptep_get_and_clear() and the relevant pte_{dirty,young}() calls),
then performing a tlbi, and finally smearing the correct bits across
the block using __set_ptes().
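
The flow described above is roughly the following (a simplified sketch
for illustration only, not verbatim kernel code; the sketch name is
made up and the pgprot/CONT-bit handling of 'pte' is elided):

  static void contpte_convert_sketch(struct mm_struct *mm, unsigned long addr,
                                     pte_t *ptep, pte_t pte)
  {
          struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
          unsigned long start_addr;
          pte_t *start_ptep;
          int i;

          /* Align down to the start of the contiguous block. */
          start_ptep = ptep = contpte_align_down(ptep);
          start_addr = addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);

          /* 1) Clear each PTE and accumulate dirty/young into 'pte'. */
          for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
                  pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);

                  if (pte_dirty(ptent))
                          pte = pte_mkdirty(pte);
                  if (pte_young(ptent))
                          pte = pte_mkyoung(pte);
          }

          /* 2) Invalidate stale TLB entries while the PTEs are clear. */
          __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

          /* 3) Write back the whole block with the smeared bits. */
          __set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
  }
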
This approach works fine for BBM level 0, but with support for BBM level
2 we are allowed to reorder the tlbi to after setting the pagetable
entries. We expect the time cost of a tlbi to be much greater than the
cost of clearing and resetting the PTEs. As such, this reordering of the
tlbi outside the window where our PTEs are invalid greatly reduces the
duration the PTEs are visibly invalid for other threads. This reduces
the likelihood of a concurrent page walk finding an invalid PTE,
reducing the likelihood of a fault in other threads, and improving
performance (more so when there are more threads).
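
Schematically (illustrative only, not actual code):

  BBM level 0:    clear PTEs -> TLBI -> set PTEs
                  (PTEs invalid for the full duration of the TLBI)
  BBML2 noabort:  clear PTEs -> set PTEs -> TLBI
                  (PTEs invalid only while they are being rewritten)
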
Because we support, via an allowlist, only BBML2 implementations that
never raise conflict aborts and instead automatically invalidate the
stale TLB entries in hardware, we could avoid the final flush
altogether. Unfortunately, certain implementations might implement
BBML2 in such a way that eliding the TLBI leads to a performance
degradation. To remain both correct and performant, we therefore switch
from __flush_tlb_range() to __flush_tlb_range_nosync(), which keeps us
correct (via BBML2 semantics) and performant (leaving the actual flush
to the next DSB) in all cases.
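
For reference, __flush_tlb_range() on arm64 is essentially the nosync
variant followed by the synchronising barrier, so this change only
defers the wait for completion, it does not skip the invalidation
(simplified sketch of the helper in arch/arm64/include/asm/tlbflush.h,
not verbatim):

  static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                       unsigned long start, unsigned long end,
                                       unsigned long stride, bool last_level,
                                       int tlb_level)
  {
          /* Issue the range TLBI(s) without waiting for completion. */
          __flush_tlb_range_nosync(vma, start, end, stride, last_level,
                                   tlb_level);
          dsb(ish);       /* wait for the invalidation to complete */
  }
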
Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@....com>
Reviewed-by: Ryan Roberts <ryan.roberts@....com>
---
 arch/arm64/mm/contpte.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..0cec1dad4922 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,9 +68,21 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 			pte = pte_mkyoung(pte);
 	}
 
-	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+	if (!system_supports_bbml2_noabort())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
+
+	/*
+	 * Despite BBML2 allowing us to elide the following TLBI whilst remaining
+	 * correct, there may be implementations where not issuing said TLBI
+	 * is not performant. To handle this, we can issue a TLBI but delegate
+	 * the flush to the next DSB (worst case, at the next context switch).
+	 * This remains correct (due to BBML2 semantics) and fast (due to not
+	 * waiting for a DSB) in all cases.
+	 */
+	if (system_supports_bbml2_noabort())
+		__flush_tlb_range_nosync(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 }
 
 void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
--
2.49.0