Message-ID: <5f1b36e2-6455-44d9-97b0-253aefd5024f@arm.com>
Date: Thu, 6 Mar 2025 11:57:36 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Anshuman Khandual <anshuman.khandual@....com>,
linux-arm-kernel@...ts.infradead.org
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64/mm: Create level specific section mappings in
map_range()
On 06/03/2025 03:37, Anshuman Khandual wrote:
>
>
> On 3/4/25 21:51, Ryan Roberts wrote:
>> On 03/03/2025 04:18, Anshuman Khandual wrote:
>>> Currently the PMD section mapping mask, i.e. PMD_TYPE_SECT, is used while
>>> creating section mappings at all page table levels except the last level.
>>> This works fine because the section mapping masks are exactly the same
>>> (0x1UL) at all page table levels.
>>>
>>> This will change in the future with D128 page tables, which have unique
>>> skip level (SKL) values required for creating section mappings at
>>> different page table levels. Hence use page table level specific section
>>> mapping macros instead of the common PMD_TYPE_SECT.
>>>
>>> Cc: Catalin Marinas <catalin.marinas@....com>
>>> Cc: Will Deacon <will@...nel.org>
>>> Cc: Ryan Roberts <ryan.roberts@....com>
>>> Cc: Ard Biesheuvel <ardb@...nel.org>
>>> Cc: linux-kernel@...r.kernel.org
>>> Cc: linux-arm-kernel@...ts.infradead.org
>>> Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
>>> ---
>>> This patch applies on 6.14-rc5
>>>
>>> PGD_TYPE_SECT for level -1 section map handling has been added for the 4K
>>> base page, 52-bit VA configuration, which has 5 page table levels. In
>>> such cases (CONFIG_PGTABLE_LEVELS = 5) early_map_kernel() can eventually
>>> call map_range() with -1 (i.e. 4 - CONFIG_PGTABLE_LEVELS) as the
>>> root_level.
>>
>> Table D8-16 on page D8-6459 of ARM DDI 0487 L.a says that block mappings
>> at level -1 are not permitted for 4K pages; only levels 0-3 support leaf
>> mappings. Similarly for 16K, table D8-26 says only levels 1-3 permit leaf
>> mappings. And for 64K, table D8-35 says only levels 1-3 permit leaf mappings.
>
> Then it seems like the current code is actually wrong, because PMD_TYPE_SECT
> is being set at all levels (except level 3) regardless of the configured
> page size?
Yes, I think so. In that case, you should mark it with Fixes: and Cc: stable.
But I feel like there might be some subtlety here that means that the problem
can't happen in practice. Ard might know?
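
For reference, the leaf-mapping rules from those three tables are compact
enough to encode directly. Here's a quick userspace model I used to
sanity-check the above (my sketch only, not kernel code; leaf_supported()
and the granule enum are made up for illustration):

#include <stdbool.h>
#include <stdio.h>

enum granule { G4K, G16K, G64K };

/* Which levels permit leaf (block/page) descriptors, per ARM DDI
 * 0487 L.a tables D8-16 (4K), D8-26 (16K) and D8-35 (64K). */
static bool leaf_supported(enum granule g, int level)
{
	if (level == 3)
		return true;		/* page descriptors: all granules */
	if (g == G4K)
		return level >= 0;	/* 4K: blocks at levels 0-2 */
	return level >= 1;		/* 16K/64K: blocks at levels 1-2 */
}

int main(void)
{
	for (int level = -1; level <= 3; level++)
		printf("level %2d: 4K=%d 16K=%d 64K=%d\n", level,
		       leaf_supported(G4K, level),
		       leaf_supported(G16K, level),
		       leaf_supported(G64K, level));
	return 0;
}

Running it prints exactly the level/granule matrix described above.
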
>
>>
>> So I don't think PGD_TYPE_SECT is the right solution. Perhaps we need to
>> explicitly force the unsupported levels to be table entries even if the
>> alignment is correct?
>
> Just wondering - would something like the following work instead? Tested
> with both 4K and 64K page sizes.
LGTM!
>
> --- a/arch/arm64/kernel/pi/map_range.c
> +++ b/arch/arm64/kernel/pi/map_range.c
> @@ -26,6 +26,21 @@
>   * @va_offset:	Offset between a physical page and its current mapping
>   *			in the VA space
>   */
> +static bool sect_supported(int level)
> +{
> +	switch (level) {
> +	case -1:
> +		return false;
> +	case 0:
> +		if (IS_ENABLED(CONFIG_ARM64_16K_PAGES) ||
> +		    IS_ENABLED(CONFIG_ARM64_64K_PAGES))
> +			return false;
> +		fallthrough;
> +	default:
> +		return true;
> +	}
> +}
> +
>  void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
>  		      int level, pte_t *tbl, bool may_use_cont, u64 va_offset)
>  {
> @@ -44,13 +59,30 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
>  	 * Set the right block/page bits for this level unless we are
>  	 * clearing the mapping
>  	 */
> -	if (protval)
> -		protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE;
> +	if (protval && sect_supported(level)) {
> +		switch (level) {
> +		case 3:
> +			protval |= PTE_TYPE_PAGE;
> +			break;
> +		case 2:
> +			protval |= PMD_TYPE_SECT;
> +			break;
> +		case 1:
> +			protval |= PUD_TYPE_SECT;
> +			break;
> +		case 0:
> +			protval |= P4D_TYPE_SECT;
> +			break;
> +		default:
> +			break;
> +		}
> +	}
>
>  	while (start < end) {
>  		u64 next = min((start | lmask) + 1, PAGE_ALIGN(end));
> 
> -		if (level < 3 && (start | next | pa) & lmask) {
> +		if ((level < 3 && (start | next | pa) & lmask) ||
> +		    !sect_supported(level)) {
>  			/*
>  			 * This chunk needs a finer grained mapping. Create a
>  			 * table mapping if necessary and recurse.
>
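
To convince myself that the walk still terminates and that an unsupported
level always takes the table route, I also modelled the loop in userspace
(heavily simplified: 4K geometry only, no pa or table allocation, and
map()/lmask()/sect_supported() here are stand-ins I made up rather than
the real kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 4K granule: each level resolves 9 VA bits, pages are 4K. */
static uint64_t lmask(int level)
{
	return (1ULL << (12 + 9 * (3 - level))) - 1;
}

static bool sect_supported(int level)
{
	return level >= 0;	/* 4K: no blocks at level -1 */
}

static void map(uint64_t start, uint64_t end, int level)
{
	while (start < end) {
		uint64_t next = (start | lmask(level)) + 1;

		if (next > end)
			next = end;
		if ((level < 3 && (start | next) & lmask(level)) ||
		    !sect_supported(level)) {
			/* Needs a finer grained mapping: descend, as
			 * map_range() does via its table recursion. */
			map(start, next, level + 1);
		} else {
			printf("level %d leaf: [%#llx-%#llx)\n", level,
			       (unsigned long long)start,
			       (unsigned long long)next);
		}
		start = next;
	}
}

int main(void)
{
	/* 1GB + 2MB starting at a 1GB boundary: expect one level 1
	 * block followed by one level 2 block. */
	map(0x40000000ULL, 0x40000000ULL + 0x40200000ULL, -1);
	return 0;
}

With the !sect_supported() term OR'ed in, a level that can't hold a leaf
always descends, even when the alignment check alone would have allowed a
block.
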
>>
>> Thanks,
>> Ryan
>>
>>>
>>>  arch/arm64/include/asm/pgtable-hwdef.h |  1 +
>>>  arch/arm64/kernel/pi/map_range.c       | 23 +++++++++++++++++++++--
>>>  2 files changed, 22 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
>>> index a9136cc551cc..fd0a82e8878c 100644
>>> --- a/arch/arm64/include/asm/pgtable-hwdef.h
>>> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
>>> @@ -99,6 +99,7 @@
>>>  #define PGD_TYPE_TABLE		(_AT(pgdval_t, 3) << 0)
>>>  #define PGD_TABLE_BIT		(_AT(pgdval_t, 1) << 1)
>>>  #define PGD_TYPE_MASK		(_AT(pgdval_t, 3) << 0)
>>> +#define PGD_TYPE_SECT		(_AT(pgdval_t, 1) << 0)
>>>  #define PGD_TABLE_AF		(_AT(pgdval_t, 1) << 10) /* Ignored if no FEAT_HAFT */
>>>  #define PGD_TABLE_PXN		(_AT(pgdval_t, 1) << 59)
>>>  #define PGD_TABLE_UXN		(_AT(pgdval_t, 1) << 60)
>>> diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c
>>> index 2b69e3beeef8..9ea869f5745f 100644
>>> --- a/arch/arm64/kernel/pi/map_range.c
>>> +++ b/arch/arm64/kernel/pi/map_range.c
>>> @@ -44,8 +44,27 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
>>>  	 * Set the right block/page bits for this level unless we are
>>>  	 * clearing the mapping
>>>  	 */
>>> -	if (protval)
>>> -		protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE;
>>> +	if (protval) {
>>> +		switch (level) {
>>> +		case 3:
>>> +			protval |= PTE_TYPE_PAGE;
>>> +			break;
>>> +		case 2:
>>> +			protval |= PMD_TYPE_SECT;
>>> +			break;
>>> +		case 1:
>>> +			protval |= PUD_TYPE_SECT;
>>> +			break;
>>> +		case 0:
>>> +			protval |= P4D_TYPE_SECT;
>>> +			break;
>>> +		case -1:
>>> +			protval |= PGD_TYPE_SECT;
>>> +			break;
>>> +		default:
>>> +			break;
>>> +		}
>>> +	}
>>>
>>>  	while (start < end) {
>>>  		u64 next = min((start | lmask) + 1, PAGE_ALIGN(end));
>>