Message-ID: <bfdcbbb0-3df0-1778-6250-99e1120bb077@xen0n.name>
Date: Wed, 13 Sep 2023 09:23:31 +0800
From: WANG Xuerui <kernel@...0n.name>
To: Huacai Chen <chenhuacai@...nel.org>
Cc: Huacai Chen <chenhuacai@...ngson.cn>, loongarch@...ts.linux.dev,
Xuefeng Li <lixuefeng@...ngson.cn>,
Guo Ren <guoren@...nel.org>,
Jiaxun Yang <jiaxun.yang@...goat.com>,
linux-kernel@...r.kernel.org, loongson-kernel@...ts.loongnix.cn,
WANG Xuerui <git@...0n.name>
Subject: Re: [PATCH] LoongArch: Set all reserved memblocks on Node#0 at initialization
On 9/13/23 08:49, Huacai Chen wrote:
> On Wed, Sep 13, 2023 at 12:08 AM WANG Xuerui <kernel@...0n.name> wrote:
>> On 9/11/23 17:28, Huacai Chen wrote:
>>> After commit 61167ad5fecdea ("mm: pass nid to reserve_bootmem_region()")
>>> we get a panic if DEFERRED_STRUCT_PAGE_INIT is enabled:
>>>
>>> [snip]
>>>
>>> The reason is early memblock_reserve() in memblock_init() set node id
>> Why is it that only "early" but not "late" memblock_reserve() matters? I
>> failed to see the reason because the arch-specific memblock_init() isn't
>> even in the backtrace, which means that *neither* is the culprit.
> Late memblock_reserve() operates on subregions of memblock.memory
> regions. These reserved regions will be set to the correct node at the
> first iteration of memmap_init_reserved_pages().
Thanks for the clarification. Judging from the code behavior (and the
comment I left on the reordering change below), I'm now sure the intended
meaning is that calling memblock_reserve() after memblock_set_node()
effectively leaves those regions with nid=MAX_NUMNODES (or something
along those lines, pointing out that the memblock_set_node() call had no
effect on those regions). "Early" and "late" can be especially confusing
in the context of init code, IMO :-)
>
> Huacai
>
>>> to MAX_NUMNODES, which causes NODE_DATA(nid) be a NULL dereference in
>> "making NODE_DATA(nid) a NULL ..."
>>> reserve_bootmem_region() -> init_reserved_page(). So set all reserved
>>> memblocks on Node#0 at initialization to avoid this panic.
>>>
>>> Reported-by: WANG Xuerui <git@...0n.name>
>>> Signed-off-by: Huacai Chen <chenhuacai@...ngson.cn>
>>> ---
>>> arch/loongarch/kernel/mem.c | 4 +++-
>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/loongarch/kernel/mem.c b/arch/loongarch/kernel/mem.c
>>> index 4a4107a6a965..aed901c57fb4 100644
>>> --- a/arch/loongarch/kernel/mem.c
>>> +++ b/arch/loongarch/kernel/mem.c
>>> @@ -50,7 +50,6 @@ void __init memblock_init(void)
>>> }
>>>
>>> memblock_set_current_limit(PFN_PHYS(max_low_pfn));
>>> - memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
>>>
>>> /* Reserve the first 2MB */
>>> memblock_reserve(PHYS_OFFSET, 0x200000);
>>> @@ -58,4 +57,7 @@ void __init memblock_init(void)
>>> /* Reserve the kernel text/data/bss */
>>> memblock_reserve(__pa_symbol(&_text),
>>> __pa_symbol(&_end) - __pa_symbol(&_text));
>>> +
>>> + memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
>>> + memblock_set_node(0, PHYS_ADDR_MAX, &memblock.reserved, 0);
>> So the reordering is for being able to override the newly added
>> memblocks' nids to 0, and additionally doing the same for
>> memblock.reserved is the actual fix. Looks okay.
>>> }
>> And I've tested the patch on the 2-way 3C5000L server, and it now
>> correctly boots with deferred struct page init enabled. Thanks for
>> providing such a quick fix!
>>
>> Tested-by: WANG Xuerui <git@...0n.name>
>> Reviewed-by: WANG Xuerui <git@...0n.name> # with nits addressed
>>
--
WANG "xen0n" Xuerui
Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/