Message-ID: <5a44c44a-141c-363d-c23e-558edc23b9b4@redhat.com>
Date:   Wed, 8 Dec 2021 09:24:39 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Alexey Makhalov <amakhalov@...are.com>,
        Dennis Zhou <dennis@...nel.org>,
        Eric Dumazet <eric.dumazet@...il.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Oscar Salvador <osalvador@...e.de>, Tejun Heo <tj@...nel.org>,
        Christoph Lameter <cl@...ux.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH v3] mm: fix panic in __alloc_pages

On 08.12.21 09:12, Michal Hocko wrote:
> On Tue 07-12-21 19:03:28, David Hildenbrand wrote:
>> On 07.12.21 18:17, Alexey Makhalov wrote:
>>>
>>>
>>>> On Dec 7, 2021, at 9:13 AM, David Hildenbrand <david@...hat.com> wrote:
>>>>
>>>> On 07.12.21 18:02, Alexey Makhalov wrote:
>>>>>
>>>>>
>>>>>> On Dec 7, 2021, at 8:36 AM, Michal Hocko <mhocko@...e.com> wrote:
>>>>>>
>>>>>> On Tue 07-12-21 17:27:29, Michal Hocko wrote:
>>>>>> [...]
>>>>>>> So your proposal is to drop set_node_online from the patch and add it as
>>>>>>> a separate one which handles
>>>>>>> 	- sysfs part (i.e. do not register a node which doesn't span a
>>>>>>> 	  physical address space)
>>>>>>> 	- hotplug side of (drop the pgd allocation, register node lazily
>>>>>>> 	  when a first memblocks are registered)
>>>>>>
>>>>>> In other words, the first stage
>>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>>> index c5952749ad40..f9024ba09c53 100644
>>>>>> --- a/mm/page_alloc.c
>>>>>> +++ b/mm/page_alloc.c
>>>>>> @@ -6382,7 +6382,11 @@ static void __build_all_zonelists(void *data)
>>>>>> 	if (self && !node_online(self->node_id)) {
>>>>>> 		build_zonelists(self);
>>>>>> 	} else {
>>>>>> -		for_each_online_node(nid) {
>>>>>> +		/*
>>>>>> +		 * All possible nodes have a pgdat preallocated in
>>>>>> +		 * free_area_init()
>>>>>> +		 */
>>>>>> +		for_each_node(nid) {
>>>>>> 			pg_data_t *pgdat = NODE_DATA(nid);
>>>>>>
>>>>>> 			build_zonelists(pgdat);
>>>>>
>>>>> Will it blow up memory usage for nodes that might never be onlined?
>>>>> I prefer the idea of init on demand.
>>>>>
>>>>> Even now there is an existing problem.
>>>>> In my experiments, I observed a _huge_ increase in memory consumption when increasing
>>>>> the number of possible NUMA nodes. I’m going to report it in a separate mail thread.
>>>>
>>>> I already raised that PPC might be problematic in that regard. Which
>>>> architecture / setup do you have in mind that can have a lot of possible
>>>> nodes?
>>>>
>>> It is an x86_64 VMware VM, not a regular one, but specially configured (1 vCPU per node,
>>> with hot-plug support, 128 possible nodes)
>>
>> I thought the pgdat would be smaller but I just gave it a test:
> 
> Yes, pgdat is quite large! The embedded zones alone can eat a lot.
> 
>> On my system, pg_data_t is 173824 bytes. So 128 nodes would correspond to
>> 21 MiB, which is indeed a lot. I assume it's due to "struct zonelist",
>> which has MAX_ZONES_PER_ZONELIST == (MAX_NUMNODES * MAX_NR_ZONES) zone
>> references ...
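
(As a back-of-the-envelope check, paraphrasing include/linux/mmzone.h;
the exact numbers depend on the kernel config:

	/* Maximum number of zones on a zonelist */
	#define MAX_ZONES_PER_ZONELIST (MAX_NUMNODES * MAX_NR_ZONES)

	struct zoneref {
		struct zone *zone;	/* Pointer to actual zone */
		int zone_idx;		/* zone_idx(zoneref->zone) */
	};

	struct zonelist {
		struct zoneref _zonerefs[MAX_ZONES_PER_ZONELIST + 1];
	};

With sizeof(struct zoneref) == 16 on 64-bit, and assuming
CONFIG_NODES_SHIFT=10 (MAX_NUMNODES == 1024) and MAX_NR_ZONES == 5,
one zonelist is (1024 * 5 + 1) * 16 bytes, roughly 80 KiB, and with
NUMA there are two of them per pgdat (ZONELIST_FALLBACK and
ZONELIST_NOFALLBACK), which accounts for most of the ~170 KiB.)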
> 
> This is what pahole tells me
> struct pglist_data {
>         struct zone                node_zones[4] __attribute__((__aligned__(64))); /*     0  5632 */
>         /* --- cacheline 88 boundary (5632 bytes) --- */
>         struct zonelist            node_zonelists[1];    /*  5632    80 */
> 	[...]
>         /* size: 6400, cachelines: 100, members: 27 */
>         /* sum members: 6369, holes: 5, sum holes: 31 */
> 
> with my particular config (which is !NUMA). I haven't really checked
> whether there are other places which might scale with MAX_NUM_NODES or
> something like that.
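
(For reference, the struct layout above can be dumped with pahole from a
vmlinux built with debug info, e.g. "pahole -C pglist_data vmlinux".)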
> 
> Anyway, is 21MB of wasted space for a 128-node machine really
> noteworthy?
> 

I think we might soon see setups (again, CXL is an example, but also
when providing a dynamic amount of performance-differentiated memory
via virtio-mem) where this will most probably matter. With
performance-differentiated memory we'll see a lot more nodes getting
used in general, and a lot more nodes eventually getting hotplugged.

Whether 128 nodes is realistic, I cannot tell.

We could optimize by allocating some members dynamically. For example,
we'll never need MAX_NUMNODES zonelist entries, only as many as there
are possible nodes.
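
A rough sketch of that direction (hypothetical and untested; the name
zonelist_dyn and the helper are invented here, and a real change would
have to touch every zonelist consumer):

	/*
	 * Size the zoneref array by the number of possible nodes
	 * instead of by the compile-time MAX_NUMNODES.
	 */
	struct zonelist_dyn {
		unsigned int nr_zonerefs;
		struct zoneref _zonerefs[];
	};

	static struct zonelist_dyn * __init alloc_zonelist(void)
	{
		/* One zoneref per (possible node, zone) pair, plus a terminator. */
		unsigned int nr = num_possible_nodes() * MAX_NR_ZONES + 1;
		struct zonelist_dyn *zl;

		zl = memblock_alloc(struct_size(zl, _zonerefs, nr),
				    SMP_CACHE_BYTES);
		if (zl)
			zl->nr_zonerefs = nr;
		return zl;
	}

With 128 possible nodes instead of MAX_NUMNODES == 1024, that alone
would shrink each zonelist by roughly a factor of 8.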

-- 
Thanks,

David / dhildenb
