Message-Id: <174679458910.1792901.13579160002600202326.b4-ty@kernel.org>
Date: Fri, 9 May 2025 14:55:18 +0100
From: Will Deacon <will@...nel.org>
To: Catalin Marinas <catalin.marinas@....com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
Christoph Hellwig <hch@...radead.org>,
David Hildenbrand <david@...hat.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Mark Rutland <mark.rutland@....com>,
Anshuman Khandual <anshuman.khandual@....com>,
Alexandre Ghiti <alexghiti@...osinc.com>,
Kevin Brodsky <kevin.brodsky@....com>,
Ryan Roberts <ryan.roberts@....com>
Cc: kernel-team@...roid.com,
Will Deacon <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64
On Tue, 22 Apr 2025 09:18:08 +0100, Ryan Roberts wrote:
> This is v4 of a series to improve performance for hugetlb and vmalloc on arm64.
> Although some of these patches are core-mm, advice from Andrew was to go via the
> arm64 tree. All patches are now acked/reviewed by relevant maintainers so I
> believe this should be good-to-go.
>
> The two key performance improvements are: 1) enabling the use of
> contpte-mapped blocks in the vmalloc space when appropriate, which
> reduces TLB pressure (there were already hooks for this, used by
> powerpc, but they required some tidying and extending for arm64); and
> 2) batching up barriers when modifying the vmalloc address space, for
> up to a 30% reduction in the time taken in vmalloc().
>
> [...]
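
For readers of the archive: the barrier-batching idea in (2) amounts to
deferring the per-write synchronisation until a whole run of PTE updates
has been made. Below is a minimal userspace C analogue, illustrative
only: the type and helper names are invented for this sketch, and C11
fences stand in for arm64's dsb(ishst)/isb, which the actual series
defers between arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode().

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Stand-in for the kernel's pte_t; invented for this sketch. */
    typedef uint64_t pte_t;

    /* Unbatched: one release fence per PTE write. */
    static void set_pte_unbatched(pte_t *ptep, pte_t pte)
    {
        *ptep = pte;
        atomic_thread_fence(memory_order_release);
    }

    /* Batched: write the whole run, then synchronise once. */
    static void set_ptes_batched(pte_t *table, const pte_t *ptes, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            table[i] = ptes[i];
        atomic_thread_fence(memory_order_release);
    }

    int main(void)
    {
        pte_t table[16] = { 0 }, vals[16];

        for (size_t i = 0; i < 16; i++)
            vals[i] = i + 1;

        set_pte_unbatched(&table[0], vals[0]);  /* 1 write, 1 fence */
        set_ptes_batched(table, vals, 16);      /* 16 writes, 1 fence */
        return 0;
    }

The saving comes from amortising the fence over the batch rather than
paying for it on every entry, which is where the quoted up-to-30%
vmalloc() improvement is claimed.
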
Sorry for the delay in getting to this series; it all looks good.
Applied to arm64 (for-next/mm), thanks!
[01/11] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
https://git.kernel.org/arm64/c/29cb80519689
[02/11] arm64: hugetlb: Refine tlb maintenance scope
https://git.kernel.org/arm64/c/5b3f8917644e
[03/11] mm/page_table_check: Batch-check pmds/puds just like ptes
https://git.kernel.org/arm64/c/91e40668e70a
[04/11] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
https://git.kernel.org/arm64/c/ef493d234362
[05/11] arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
https://git.kernel.org/arm64/c/a899b7d0673c
[06/11] arm64/mm: Hoist barriers out of set_ptes_anysz() loop
https://git.kernel.org/arm64/c/f89b399e8d6e
[07/11] mm/vmalloc: Warn on improper use of vunmap_range()
https://git.kernel.org/arm64/c/61ef8ddaa35e
[08/11] mm/vmalloc: Gracefully unmap huge ptes
https://git.kernel.org/arm64/c/2fba13371fe8
[09/11] arm64/mm: Support huge pte-mapped pages in vmap
https://git.kernel.org/arm64/c/06fc959fcff7
[10/11] mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
https://git.kernel.org/arm64/c/44562c71e2cf
[11/11] arm64/mm: Batch barriers when updating kernel mappings
https://git.kernel.org/arm64/c/5fdd05efa1cd
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev