Message-ID: <1b94d8ce-ab6c-417d-af4e-cd49bf297b60@redhat.com>
Date: Tue, 10 Jun 2025 15:14:47 +0200
From: David Hildenbrand <david@...hat.com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org,
catalin.marinas@....com, will@...nel.org
Cc: lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
rppt@...nel.org, surenb@...gle.com, mhocko@...e.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, suzuki.poulose@....com, steven.price@....com,
gshan@...hat.com, linux-arm-kernel@...ts.infradead.org,
yang@...amperecomputing.com, ryan.roberts@....com, anshuman.khandual@....com
Subject: Re: [PATCH v2 2/2] arm64: pageattr: Use walk_page_range_novma() to
change memory permissions
On 10.06.25 13:44, Dev Jain wrote:
Subject seems outdated: the helper is now called walk_kernel_page_table_range()
> Since apply_to_page_range() does not support operations on block mappings,
> use the generic pagewalk API to enable changing permissions for kernel
> block mappings. This paves the way for enabling huge mappings by default
> on kernel-space mappings, thus leading to more efficient TLB usage.
>
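For anyone following along, the rough shape of a pagewalk-based implementation
is something like the following. This is only a sketch on my side: the
callback names, the __change_memory() wrapper and the exact
walk_kernel_page_table_range() signature are my assumptions, not quoted from
the patch.

	/*
	 * Hypothetical per-level callbacks (the pmd one is sketched further
	 * below); each applies set_mask/clear_mask to the entry it is given.
	 */
	static const struct mm_walk_ops pageattr_ops = {
		.pud_entry	= pageattr_pud_entry,
		.pmd_entry	= pageattr_pmd_entry,
		.pte_entry	= pageattr_pte_entry,
	};

	static int __change_memory(unsigned long start, unsigned long size,
				   struct page_change_data *data)
	{
		/* Walk init_mm's tables; kernel VAs have no VMAs to check. */
		return walk_kernel_page_table_range(start, start + size,
						    &pageattr_ops, NULL, data);
	}
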
> We only require that the start and end of a given range lie on leaf mapping
> boundaries. Return -EINVAL in case a partial block mapping is detected; add
> a corresponding comment in ___change_memory_common() to note that avoiding
> such a condition is the responsibility of the caller.
>
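Concretely, the partial-block check can live in the leaf callbacks: the
walker clamps "next" to the end of the requested range, so a leaf entry that
is not fully covered shows up as (next - addr) != PMD_SIZE (resp. PUD_SIZE).
A minimal sketch of such a pmd-level callback, again assuming hypothetical
names rather than quoting the patch:

	static int pageattr_pmd_entry(pmd_t *pmdp, unsigned long addr,
				      unsigned long next, struct mm_walk *walk)
	{
		struct page_change_data *data = walk->private;
		pmd_t pmd = pmdp_get(pmdp);

		/* Table entry: let the walker descend to the PTE level. */
		if (!pmd_leaf(pmd))
			return 0;

		/* Partial coverage of a block mapping: refuse, don't split. */
		if ((next - addr) != PMD_SIZE)
			return -EINVAL;

		pmd = __pmd((pmd_val(pmd) & ~pgprot_val(data->clear_mask)) |
			    pgprot_val(data->set_mask));
		set_pmd(pmdp, pmd);
		return 0;
	}
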
> apply_to_page_range() ultimately uses the lazy MMU hooks in its pte-level
> helper (apply_to_pte_range()) - we want to retain this behaviour after
> this patch too. Ryan says:
> "The only reason we traditionally confine the lazy mmu mode to a single
> page table is because we want to enclose it within the PTL. But that
> requirement doesn't stand for kernel mappings. As long as the walker can
> guarantee that it doesn't allocate any memory (because with certain debug
> settings that can cause lazy mmu nesting) or try to sleep then I think we
> can just bracket the entire call."
> Therefore, wrap the call to walk_kernel_page_table_range() with the
> lazy MMU helpers.
>
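In other words, reusing the names from the sketch above, the call site ends
up bracketed by the generic lazy MMU hooks, which is safe here exactly
because the walk neither allocates nor sleeps:

	arch_enter_lazy_mmu_mode();
	ret = walk_kernel_page_table_range(start, start + size,
					   &pageattr_ops, NULL, data);
	arch_leave_lazy_mmu_mode();
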
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
> arch/arm64/mm/pageattr.c | 158 +++++++++++++++++++++++++++++++--------
> 1 file changed, 126 insertions(+), 32 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 04d4a8f676db..2c118c0922ef 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -8,6 +8,7 @@
> #include <linux/mem_encrypt.h>
> #include <linux/sched.h>
> #include <linux/vmalloc.h>
> +#include <linux/pagewalk.h>
>
> #include <asm/cacheflush.h>
> #include <asm/pgtable-prot.h>
> @@ -20,6 +21,100 @@ struct page_change_data {
> pgprot_t clear_mask;
> };
>
The general idea looks sane to me.
--
Cheers,
David / dhildenb