Message-ID: <3d7a68d7-e33a-fd42-3362-5019feade7ce@huawei.com>
Date: Mon, 7 Dec 2020 20:07:48 +0800
From: Wei Li <liwei213@...wei.com>
To: Anshuman Khandual <anshuman.khandual@....com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Ard Biesheuvel <ardb@...nel.org>
CC: Barry Song <song.bao.hua@...ilicon.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Steve Capper <steve.capper@....com>,
Marc Zyngier <maz@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Catalin Marinas <catalin.marinas@....com>,
<butao@...ilicon.com>, Will Deacon <will@...nel.org>,
Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
<fengbaopeng2@...ilicon.com>, <saberlily.xia@...ilicon.com>,
<zhaojiapeng@...wei.com>
Subject: Re: [PATCH] arm64: mm: decrease the section size to reduce the memory
reserved for the page map
(+ saberlily + jiapeng)
On 2020/12/7 18:39, Anshuman Khandual wrote:
>
>
> On 12/7/20 3:34 PM, Mike Rapoport wrote:
>> On Mon, Dec 07, 2020 at 10:49:26AM +0100, Ard Biesheuvel wrote:
>>> On Mon, 7 Dec 2020 at 10:42, Mike Rapoport <rppt@...ux.ibm.com> wrote:
>>>>
>>>> On Mon, Dec 07, 2020 at 09:35:06AM +0000, Marc Zyngier wrote:
>>>>> On 2020-12-07 09:09, Ard Biesheuvel wrote:
>>>>>> (+ Marc)
>>>>>>
>>>>>> On Fri, 4 Dec 2020 at 12:14, Will Deacon <will@...nel.org> wrote:
>>>>>>>
>>>>>>> On Fri, Dec 04, 2020 at 09:44:43AM +0800, Wei Li wrote:
>>>>>>>> For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP enabled
>>>>>>>> does not free the memory reserved for the page map. Decreasing the
>>>>>>>> section size can reduce this waste of reserved memory.
>>>>>>>>
>>>>>>>> Signed-off-by: Wei Li <liwei213@...wei.com>
>>>>>>>> Signed-off-by: Baopeng Feng <fengbaopeng2@...ilicon.com>
>>>>>>>> Signed-off-by: Xia Qing <saberlily.xia@...ilicon.com>
>>>>>>>> ---
>>>>>>>> arch/arm64/include/asm/sparsemem.h | 2 +-
>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
>>>>>>>> index 1f43fcc79738..8963bd3def28 100644
>>>>>>>> --- a/arch/arm64/include/asm/sparsemem.h
>>>>>>>> +++ b/arch/arm64/include/asm/sparsemem.h
>>>>>>>> @@ -7,7 +7,7 @@
>>>>>>>>
>>>>>>>> #ifdef CONFIG_SPARSEMEM
>>>>>>>> #define MAX_PHYSMEM_BITS CONFIG_ARM64_PA_BITS
>>>>>>>> -#define SECTION_SIZE_BITS 30
>>>>>>>> +#define SECTION_SIZE_BITS 27
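To make the saving concrete, a rough back-of-the-envelope calculation
(assuming 4K pages and a 64-byte struct page, both of which are config
dependent, so treat the exact figures as illustrative):

	/*
	 * vmemmap cost per section, assuming 4K pages and
	 * sizeof(struct page) == 64:
	 *
	 *   SECTION_SIZE_BITS = 30: (1GB   / 4KB) * 64B = 16MB per section
	 *   SECTION_SIZE_BITS = 27: (128MB / 4KB) * 64B =  2MB per section
	 *
	 * A sparsely populated section still gets its whole vmemmap
	 * allocated, so smaller sections waste less memory on holes.
	 */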
>>>>>>>
>>>>>>> We chose '30' to avoid running out of bits in the page flags. What
>>>>>>> changed?
>>>>>>>
>>>>>>> With this patch, I can trigger:
>>>>>>>
>>>>>>> ./include/linux/mmzone.h:1170:2: error: Allocator MAX_ORDER exceeds
>>>>>>> SECTION_SIZE
>>>>>>> #error Allocator MAX_ORDER exceeds SECTION_SIZE
>>>>>>>
>>>>>>> if I bump up NR_CPUS and NODES_SHIFT.
>>>>>>>
>>>>>>
>>>>>> Does this mean we will run into problems with the GICv3 ITS LPI tables
>>>>>> again if we are forced to reduce MAX_ORDER to fit inside
>>>>>> SECTION_SIZE_BITS?
>>>>>
>>>>> Most probably. We are already massively constrained on platforms
>>>>> such as TX1, and dividing the max allocatable range by 8 isn't
>>>>> going to make it work any better...
>>>>
>>>> I don't think MAX_ORDER should shrink. Even if SECTION_SIZE_BITS is
>>>> reduced, it should accommodate the existing MAX_ORDER.
>>>>
>>>> My two pennies.
>>>>
>>>
>>> But include/linux/mmzone.h:1170 has this:
>>>
>>> #if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS
>>> #error Allocator MAX_ORDER exceeds SECTION_SIZE
>>> #endif
>>>
>>> and Will managed to trigger it after applying this patch.
>>
>> Right, because with 64K pages a section size of 27 bits is not enough to
>> accommodate MAX_ORDER (2^13 pages of 64K).
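A quick sanity check of that arithmetic, taking MAX_ORDER = 14 as the usual
arm64 64K-page default (an assumption; it is governed by
CONFIG_FORCE_MAX_ZONEORDER):

	/* 64K pages: PAGE_SHIFT = 16, MAX_ORDER = 14 (assumed default) */
	MAX_ORDER - 1 + PAGE_SHIFT = 13 + 16 = 29   /* largest buddy block: 512MB */
	SECTION_SIZE_BITS          = 27             /* section size:        128MB */
	/* 29 > 27, so the #error in include/linux/mmzone.h above fires. */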
>>
>> Which means that the definition of SECTION_SIZE_BITS should take MAX_ORDER
>> into account either statically with
>>
>> #if defined(CONFIG_ARM64_4K_PAGES)
>> #define SECTION_SIZE_BITS <a number>
>> #elif defined(CONFIG_ARM64_16K_PAGES)
>> #define SECTION_SIZE_BITS <a larger number>
>> #elif defined(CONFIG_ARM64_64K_PAGES)
>> #define SECTION_SIZE_BITS <an even larger number>
>> #else
>> #error "and what is the page size?"
>> #endif
>>
>> or dynamically, as ia64 does, for example:
>>
>> #ifdef CONFIG_FORCE_MAX_ZONEORDER
>> #if ((CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
>> #undef SECTION_SIZE_BITS
>> #define SECTION_SIZE_BITS (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)
>> #endif
>> #endif
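For illustration only, the dynamic variant applied to arm64's sparsemem.h
might look roughly like the untested sketch below, keeping the 27 from this
patch as the baseline:

	#ifdef CONFIG_SPARSEMEM
	#define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
	#define SECTION_SIZE_BITS	27

	/* Never let a section be smaller than the largest buddy allocation. */
	#if defined(CONFIG_FORCE_MAX_ZONEORDER) && \
		((CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
	#undef SECTION_SIZE_BITS
	#define SECTION_SIZE_BITS	(CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)
	#endif
	#endif /* CONFIG_SPARSEMEM */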
>
> I had proposed the same on the other thread here. But with this,
> SECTION_SIZE_BITS becomes 22 in the 4K page size case, reduced to the
> point where a PMD-based vmemmap mapping can no longer be created. Though
> I have not looked into the details much yet.
>
> Using CONFIG_FORCE_MAX_ZONEORDER seems to be the right thing to do. But
> if that does not reasonably work for 4K pages, we might have to hard-code
> it as 27 to keep huge-page vmemmap mappings.
> .
>
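The PMD point can be checked with the same sort of quick calculation (again
assuming 4K pages and a 64-byte struct page, so the numbers are approximate):

	/*
	 * With 4K pages a PMD maps 2MB, and the vmemmap of one section can
	 * only use PMD mappings if it is at least that large:
	 *
	 *   SECTION_SIZE_BITS = 27: (128MB / 4KB) * 64B = 2MB  -> one full PMD
	 *   SECTION_SIZE_BITS = 22: (4MB   / 4KB) * 64B = 64KB -> falls back
	 *                                                         to base pages
	 *
	 * which is why 27 looks like the smallest value that preserves
	 * huge-page vmemmap mappings with 4K pages.
	 */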