Message-ID: <f8cf1823-1ee9-4935-9293-86f58a9e2224@arm.com>
Date: Thu, 4 Sep 2025 14:16:17 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Yang Shi <yang@...amperecomputing.com>, Dev Jain <dev.jain@....com>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Ard Biesheuvel <ardb@...nel.org>, scott@...amperecomputing.com, cl@...two.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block
mapping when rodata=full
On 04/09/2025 14:14, Ryan Roberts wrote:
> On 03/09/2025 01:50, Yang Shi wrote:
>>>>>
>>>>>
>>>>> I am wondering whether we can just have a warn_on_once or something for the
>>>>> case when we fail to allocate a pagetable page. Or, Ryan had suggested in an
>>>>> off-the-list conversation that we could maintain a cache of PTE tables for
>>>>> every PMD block mapping, which would give us the same memory consumption as
>>>>> we have today, but I'm not sure it is worth it. x86 can already handle
>>>>> splitting, but due to the callchains I have described above it has the same
>>>>> problem, and the code has been working for years :)
>>>> I think it's preferable to avoid having to keep a cache of pgtable memory if we
>>>> can...
>>>
>>> Yes, I agree. We simply don't know how many pages we need to cache, and it
>>> still can't guarantee 100% allocation success.
>>
>> This is wrong... We can know how many pages will be needed to split the linear
>> mapping to PTEs in the worst case, once the linear mapping is finalized. But it
>> may require a few hundred megabytes of memory to guarantee allocation success.
>> I don't think it is worth it for such a rare corner case.
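
(For concreteness, assuming 4K base pages so a PMD block is 2M: splitting one
block needs one 4K PTE table, i.e. ~0.2% of the linear map. A 256G machine
would need 256G / 2M = 131072 tables * 4K = 512M, which matches the "few
hundred megabytes" above.)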
>
> Indeed, we know exactly how much memory we need for pgtables to map the linear
> map by pte - that's exactly what we are doing today. So we _could_ keep a cache.
> We would still get the benefit of improved performance but we would lose the
> benefit of reduced memory.
>
> I think we need to solve the vm_reset_perms() problem somehow, before we can
> enable this.
Sorry, I realise this was not very clear... I am saying that I think we need to
fix it somehow. A cache would likely work, but I'd prefer to avoid one if we
can find a better solution.
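
To make the cache option concrete, here is a very rough, untested sketch of
what I have in mind: pre-allocate one PTE table per linear map PMD block at
boot, then consume from the cache at split time. All identifiers here
(lm_pte_cache and friends) are invented for illustration; none of this is
existing API:

#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/mm.h>

/*
 * Pre-allocate one page per PMD block mapping in the linear map so that a
 * later split to PTEs can never fail for lack of memory.
 */
static struct page **lm_pte_cache;
static unsigned long lm_nr_pmd_blocks;	/* linear map size / PMD_SIZE */

static int __init lm_pte_cache_init(void)
{
	unsigned long i;

	lm_pte_cache = kvcalloc(lm_nr_pmd_blocks, sizeof(*lm_pte_cache),
				GFP_KERNEL);
	if (!lm_pte_cache)
		return -ENOMEM;

	for (i = 0; i < lm_nr_pmd_blocks; i++) {
		/* pgtable pages must be zeroed before first use */
		lm_pte_cache[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!lm_pte_cache[i])
			return -ENOMEM;
	}

	return 0;
}

/* At split time, take the pre-allocated table instead of allocating. */
static struct page *lm_pte_cache_take(unsigned long pmd_idx)
{
	return xchg(&lm_pte_cache[pmd_idx], NULL);
}

The downside is exactly what Yang computed above: on a big box this pins
hundreds of megabytes that will almost never be used.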
>
> Thanks,
> Ryan
>
>>
>> Thanks,
>> Yang
>>
>>>
>>> Thanks,
>>> Yang
>>>
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>
>>
>