Date:   Tue, 14 Feb 2023 18:13:53 +0800
From:   Qi Zheng <arch0.zheng@...il.com>
To:     David Hildenbrand <david@...hat.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Qi Zheng <zhengqi.arch@...edance.com>,
        akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Teng Hu <huteng.ht@...edance.com>,
        Mike Rapoport <rppt@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Oscar Salvador <osalvador@...e.de>,
        Muchun Song <muchun.song@...ux.dev>
Subject: Re: [PATCH] mm: page_alloc: don't allocate page from memoryless nodes



On 2023/2/14 17:17, David Hildenbrand wrote:
> On 14.02.23 09:42, Vlastimil Babka wrote:
>> On 2/13/23 12:00, Qi Zheng wrote:
>>>
>>>
>>> On 2023/2/13 16:47, Vlastimil Babka wrote:
>>>> On 2/12/23 12:03, Qi Zheng wrote:
>>>>> On x86, numa_register_memblks() is only interested in
>>>>> those nodes which have enough memory, so it skips over
>>>>> all nodes with memory below NODE_MIN_SIZE (treating them
>>>>> as memoryless nodes). Later on, we initialize these
>>>>> memoryless nodes (allocate their pgdat in free_area_init(),
>>>>> build their zonelists, etc.) and online them in
>>>>> init_cpu_to_node() and init_gi_nodes().
>>>>>
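For reference, the skip in question is the NODE_MIN_SIZE test in
numa_register_memblks() in arch/x86/mm/numa.c; the following is an
abridged paraphrase, so details may differ across kernel versions:

	/* Finally register nodes. */
	for_each_node_mask(nid, node_possible_map) {
		u64 start = PFN_PHYS(max_pfn);
		u64 end = 0;

		/* Compute the physical span covered by this node. */
		for (i = 0; i < mi->nr_blks; i++) {
			if (nid != mi->blk[i].nid)
				continue;
			start = min(mi->blk[i].start, start);
			end = max(mi->blk[i].end, end);
		}

		if (start >= end)
			continue;

		/*
		 * Don't confuse the VM with a node that doesn't have
		 * the minimum amount of memory:
		 */
		if (end && (end - start) < NODE_MIN_SIZE)
			continue;

		/* Only nodes that pass the check get node data here. */
		alloc_node_data(nid);
	}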
>>>>> After boot, these memoryless nodes are in the N_ONLINE
>>>>> state but not in the N_MEMORY state. Yet we can still
>>>>> allocate pages from them.
>>>>>
>>>>> SLUB, however, only processes nodes in the N_MEMORY state,
>>>>> e.g. it only allocates a struct kmem_cache_node for those
>>>>> nodes. So if a page from one of the memoryless nodes above
>>>>> is handed to SLUB, the struct kmem_cache_node of the node
>>>>> corresponding to this page is NULL, which causes a panic.
>>>>>
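This matches the trace below: deactivate_slab() looks up the node's
kmem_cache_node and takes its list_lock (the first member of the
struct, hence the fault at address 0). Roughly, in mm/slub.c
(abridged; exact code varies by kernel version):

	static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
				    void *freelist)
	{
		/* NULL for a node SLUB never set up (not in N_MEMORY) */
		struct kmem_cache_node *n = get_node(s, slab_nid(slab));
		unsigned long flags;
		...
		/* NULL pointer dereference when n == NULL */
		spin_lock_irqsave(&n->list_lock, flags);
		...
	}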
>>>>> For example, if we use qemu to boot a kernel with two NUMA
>>>>> nodes, where one node has 2M of memory (less than
>>>>> NODE_MIN_SIZE) and the other has 2G, we will encounter the
>>>>> following panic:
>>>>>
>>>>> [    0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
>>>>> [    0.150783] #PF: supervisor write access in kernel mode
>>>>> [    0.151488] #PF: error_code(0x0002) - not-present page
>>>>> <...>
>>>>> [    0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
>>>>> <...>
>>>>> [    0.169781] Call Trace:
>>>>> [    0.170159]  <TASK>
>>>>> [    0.170448]  deactivate_slab+0x187/0x3c0
>>>>> [    0.171031]  ? bootstrap+0x1b/0x10e
>>>>> [    0.171559]  ? preempt_count_sub+0x9/0xa0
>>>>> [    0.172145]  ? kmem_cache_alloc+0x12c/0x440
>>>>> [    0.172735]  ? bootstrap+0x1b/0x10e
>>>>> [    0.173236]  bootstrap+0x6b/0x10e
>>>>> [    0.173720]  kmem_cache_init+0x10a/0x188
>>>>> [    0.174240]  start_kernel+0x415/0x6ac
>>>>> [    0.174738]  secondary_startup_64_no_verify+0xe0/0xeb
>>>>> [    0.175417]  </TASK>
>>>>> [    0.175713] Modules linked in:
>>>>> [    0.176117] CR2: 0000000000000000
>>>>>
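For reference, a QEMU invocation along these lines should reproduce
the memory layout described above (a hypothetical example; exact
flags vary with the QEMU version):

	qemu-system-x86_64 -enable-kvm -smp 2 -m 2050M \
		-object memory-backend-ram,id=m0,size=2M \
		-object memory-backend-ram,id=m1,size=2048M \
		-numa node,nodeid=0,memdev=m0,cpus=0 \
		-numa node,nodeid=1,memdev=m1,cpus=1 \
		-kernel bzImage -append "console=ttyS0" -nographic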
>>>>> In addition, we have also encountered this panic in an
>>>>> actual production environment: we set up a 2c2g container
>>>>> with two NUMA nodes and reserved 128M for kdump, and then
>>>>> hit the above panic in the kdump kernel.
>>>>>
>>>>> To fix it, we can filter out memoryless nodes when
>>>>> allocating pages.
>>>>>
>>>>> Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
>>>>> Reported-by: Teng Hu <huteng.ht@...edance.com>
>>>>
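The patch body itself isn't quoted above, but "filter out memoryless
nodes when allocating pages" suggests a check of roughly this flavor
in the zonelist walk of get_page_from_freelist() in mm/page_alloc.c;
a hypothetical sketch, not the actual diff:

	for_next_zone_zonelist_nodemask(zone, z, ac->highest_zoneidx,
					ac->nodemask) {
		/* Hypothetical: skip zones on nodes without memory. */
		if (!node_state(zone_to_nid(zone), N_MEMORY))
			continue;
		...
	}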
>>>> Well, AFAIK the key mechanism for only allocating from "good" nodes
>>>> is the zonelist; we shouldn't need to start adding extra checks like
>>>> this. So it seems to me that the code building the zonelists should
>>>> take the NODE_MIN_SIZE constraint into account.
>>>
>>> Indeed. How about the following patch:
>>
>> +Cc also David, forgot earlier.
>>
>> Looks good to me, at least.
>>
>>> @@ -6382,8 +6378,11 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
>>>           int min_val = INT_MAX;
>>>           int best_node = NUMA_NO_NODE;
>>>
>>> -       /* Use the local node if we haven't already */
>>> -       if (!node_isset(node, *used_node_mask)) {
>>> +       /*
>>> +        * Use the local node if we haven't already. But for a memoryless
>>> +        * local node, we should skip it and fall back to other nodes.
>>> +        */
>>> +       if (!node_isset(node, *used_node_mask) && node_state(node, N_MEMORY)) {
>>>                   node_set(node, *used_node_mask);
>>>                   return node;
>>>           }
>>>
For a memoryless node, we skip it and fall back to other nodes
when building the zonelists.
>>>
>>> Say we have node0 and node1, and node0 is memoryless, then:
>>>
>>> [    0.102400] Fallback order for Node 0: 1
>>> [    0.102931] Fallback order for Node 1: 1
>>>
>>> In this way, we will not allocate pages from memoryless node0.
>>>
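For context: find_next_best_node() is the iterator that
build_zonelists() uses to order each node's fallback list, which is
why skipping !N_MEMORY nodes there keeps them out of every fallback
list. Roughly, from mm/page_alloc.c (abridged; the distance/node_load
bookkeeping is omitted):

	static void build_zonelists(pg_data_t *pgdat)
	{
		int node_order[MAX_NUMNODES];
		nodemask_t used_mask = NODE_MASK_NONE;
		int local_node = pgdat->node_id;
		int node, nr_nodes = 0;

		/*
		 * Collect nodes best-first; a node that
		 * find_next_best_node() never returns will not appear
		 * in this node's fallback order.
		 */
		while ((node = find_next_best_node(local_node,
						   &used_mask)) >= 0)
			node_order[nr_nodes++] = node;

		build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
		build_thisnode_zonelists(pgdat);
	}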
> 
> In offline_pages(), we first call build_all_zonelists() and only then
> node_states_clear_node()->node_clear_state(node, N_MEMORY);
> 
> So at least on the offlining path, I assume we wouldn't detect it
> properly yet, and would build a zonelist that contains a
> now-memoryless node?

Even without this patch, it seems that node_states_clear_node() should
have been called before build_all_zonelists()? Otherwise, the node
whose N_MEMORY state is about to be cleared will still end up in the
fallback lists of other nodes.
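
The ordering in question, roughly, from offline_pages() in
mm/memory_hotplug.c (abridged; exact code varies by version):

	/* removal success */
	...
	if (!populated_zone(zone)) {
		zone_pcp_reset(zone);
		/* zonelists are rebuilt while the node is still N_MEMORY */
		build_all_zonelists(NULL);
	}
	...
	/* only afterwards is N_MEMORY cleared for the node */
	node_states_clear_node(node, &arg);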

> 

-- 
Thanks,
Qi
