Message-ID: <9a9d9e82-2919-4c93-92c2-34e29f71044e@redhat.com>
Date: Tue, 1 Apr 2025 15:38:31 +0200
From: David Hildenbrand <david@...hat.com>
To: Mikołaj Lenczewski <miko.lenczewski@....com>,
ryan.roberts@....com, suzuki.poulose@....com, yang@...amperecomputing.com,
corbet@....net, catalin.marinas@....com, will@...nel.org,
jean-philippe@...aro.org, robin.murphy@....com, joro@...tes.org,
akpm@...ux-foundation.org, ardb@...nel.org, mark.rutland@....com,
joey.gouly@....com, maz@...nel.org, james.morse@....com, broonie@...nel.org,
oliver.upton@...ux.dev, baohua@...nel.org, ioworker0@...il.com,
jgg@...pe.ca, nicolinc@...dia.com, mshavit@...gle.com, jsnitsel@...hat.com,
smostafa@...gle.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
iommu@...ts.linux.dev
Subject: Re: [PATCH v5 3/3] arm64/mm: Elide tlbi in contpte_convert() under
BBML2
On 25.03.25 10:36, Mikołaj Lenczewski wrote:
> When converting a region via contpte_convert() to use mTHP, we have two
> different goals. We have to mark each entry as contiguous, and we would
> like to smear the dirty and young (access) bits across all entries in
> the contiguous block. Currently, we do this by first accumulating the
> dirty and young bits in the block, using an atomic
> __ptep_get_and_clear() and the relevant pte_{dirty,young}() calls,
> performing a tlbi, and finally smearing the correct bits across the
> block using __set_ptes().
>
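For context, the flow described above is roughly the following
(paraphrased from arch/arm64/mm/contpte.c and simplified, so details
may differ):

	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
		/* Atomically clear each PTE while collecting its state. */
		pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);

		/* Accumulate dirty/young to smear across the block later. */
		if (pte_dirty(ptent))
			pte = pte_mkdirty(pte);
		if (pte_young(ptent))
			pte = pte_mkyoung(pte);
	}

	/* Flush while the PTEs are invalid: the window this patch targets. */
	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

	/* Write back the contiguous PTEs with the accumulated bits. */
	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
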
> This approach works fine for BBM level 0, but with support for BBM level
> 2 we are allowed to reorder the tlbi to after setting the pagetable
> entries. We expect the time cost of a tlbi to be much greater than the
> cost of clearing and resetting the PTEs. As such, this reordering of the
> tlbi outside the window where our PTEs are invalid greatly reduces the
> duration the PTEs are visibly invalid to other threads. This reduces
> the likelihood of a concurrent page walk finding an invalid PTE,
> reducing the likelihood of a fault in other threads, and improving
> performance (more so when there are more threads).
>
> Because the allowlist only admits BBML2 implementations that never
> raise conflict aborts and instead invalidate stale TLB entries
> automatically in hardware, we can avoid the final flush altogether.
> Avoiding flushes is a win.
>
> Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@....com>
> Reviewed-by: Ryan Roberts <ryan.roberts@....com>
> ---
> arch/arm64/mm/contpte.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index 55107d27d3f8..77ed03b30b72 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -68,7 +68,8 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
> pte = pte_mkyoung(pte);
> }
>
> - __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
> + if (!system_supports_bbml2_noabort())
> + __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
>
> __set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
> }
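
As an aside, for anyone reading along without the rest of the series:
system_supports_bbml2_noabort() is presumably the cpucap helper added
earlier in this series. A minimal sketch of such a helper (the cap name
ARM64_HAS_BBML2_NOABORT is assumed here, not taken from this patch)
would be:

	static inline bool system_supports_bbml2_noabort(void)
	{
		/* Assumed cap name; the real one is defined in patch 1/3. */
		return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
	}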
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Cheers,
David / dhildenb