Message-ID: <0fe6f3b2-1011-4418-bc19-612a3b98c78d@arm.com>
Date: Tue, 4 Mar 2025 16:21:14 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Anshuman Khandual <anshuman.khandual@....com>,
linux-arm-kernel@...ts.infradead.org
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64/mm: Create level specific section mappings in
map_range()
On 03/03/2025 04:18, Anshuman Khandual wrote:
> Currently PMD section mapping mask i.e PMD_TYPE_SECT is used while creating
> section mapping at all page table levels except the last level. This works
> fine as the section mapping masks are exactly the same (0x1UL) for all page
> table levels.
>
> This will change in the future with D128 page tables that have unique skip
> level values (SKL) required for creating section mapping at different page
> table levels. Hence use page table level specific section mapping macros
> instead of the common PMD_TYPE_SECT.
>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: Will Deacon <will@...nel.org>
> Cc: Ryan Roberts <ryan.roberts@....com>
> Cc: Ard Biesheuvel <ardb@...nel.org>
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-arm-kernel@...ts.infradead.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
> ---
> This patch applies on 6.14-rc5
>
> PGD_TYPE_SECT for level -1 section map handling has been added for 4K
> base pages with 52 bit VA configuration that has 5 page table levels.
> In such cases (CONFIG_PGTABLE_LEVELS = 5) early_map_kernel() can call
> map_range() eventually with -1 (i.e 4 - CONFIG_PGTABLE_LEVELS) as the
> root_level.
Table D8-16 on page D8-6459 of ARM DDI 0487 L.a says that block mappings
at level -1 are not permitted for 4K pages; only levels 0-3 support leaf
mappings. Similarly for 16K, table D8-26 says only levels 1-3 permit leaf
mappings. And for 64K, table D8-35 says only levels 1-3 permit leaf mappings.
So I don't think PGD_TYPE_SECT is the right solution. Perhaps we need to
explicitly force the unsupported levels to be table entries even if the
alignment is correct?
Thanks,
Ryan
>
> arch/arm64/include/asm/pgtable-hwdef.h | 1 +
> arch/arm64/kernel/pi/map_range.c | 23 +++++++++++++++++++++--
> 2 files changed, 22 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index a9136cc551cc..fd0a82e8878c 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -99,6 +99,7 @@
> #define PGD_TYPE_TABLE (_AT(pgdval_t, 3) << 0)
> #define PGD_TABLE_BIT (_AT(pgdval_t, 1) << 1)
> #define PGD_TYPE_MASK (_AT(pgdval_t, 3) << 0)
> +#define PGD_TYPE_SECT (_AT(pgdval_t, 1) << 0)
> #define PGD_TABLE_AF (_AT(pgdval_t, 1) << 10) /* Ignored if no FEAT_HAFT */
> #define PGD_TABLE_PXN (_AT(pgdval_t, 1) << 59)
> #define PGD_TABLE_UXN (_AT(pgdval_t, 1) << 60)
> diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c
> index 2b69e3beeef8..9ea869f5745f 100644
> --- a/arch/arm64/kernel/pi/map_range.c
> +++ b/arch/arm64/kernel/pi/map_range.c
> @@ -44,8 +44,27 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
> * Set the right block/page bits for this level unless we are
> * clearing the mapping
> */
> - if (protval)
> - protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE;
> + if (protval) {
> + switch (level) {
> + case 3:
> + protval |= PTE_TYPE_PAGE;
> + break;
> + case 2:
> + protval |= PMD_TYPE_SECT;
> + break;
> + case 1:
> + protval |= PUD_TYPE_SECT;
> + break;
> + case 0:
> + protval |= P4D_TYPE_SECT;
> + break;
> + case -1:
> + protval |= PGD_TYPE_SECT;
> + break;
> + default:
> + break;
> + }
> + }
>
> while (start < end) {
> u64 next = min((start | lmask) + 1, PAGE_ALIGN(end));