Message-ID: <0b673652-8a84-4769-a193-d090a50e91cd@nvidia.com>
Date: Fri, 10 Jan 2025 15:59:38 +0100
From: Bruno Faccini <bfaccini@...dia.com>
To: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org, rppt@...nel.org,
 ziy@...dia.com, jhubbard@...dia.com, mrusiniak@...dia.com
Subject: Re: [PATCH 0/1] mm/fake-numa: allow later numa node hotplug

Hello David,

On 07/01/2025 at 11:08, David Hildenbrand wrote:
> 
> Hi,
> 
>>
>> With the recent M. Rapoport fake-numa patch set in mm-everything
>> and using the numa=fake=4 boot parameter:
>> $ numactl --hardware
>> available: 4 nodes (0-3)
>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 0 size: 122518 MB
>> node 0 free: 117141 MB
>> node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 1 size: 219911 MB
>> node 1 free: 219751 MB
>> node 2 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 2 size: 122599 MB
>> node 2 free: 122541 MB
>> node 3 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 3 size: 122479 MB
>> node 3 free: 122408 MB
> 
> Why are all CPUs indicated as belonging to all nodes? Is that expected
> or a BUG?

This behaviour comes from the original fake-numa implementation and has
been left as-is by M. Rapoport's recent fake-numa changes.

> 
> I would have thought, just like memory, that one resource only belongs
> to one NUMA node.
"All fake-NUMA nodes that belong to a physical NUMA node share the same 
CPU cores", this was already the case in original/x86-only 
implementation so that fake-NUMA does not affect application launch 
commands.
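
To illustrate, here is a minimal user-space sketch (not part of the patch;
it assumes libnuma, build with "gcc show_node_cpus.c -lnuma") that walks the
same node-to-CPU mapping numactl prints. With numa=fake=N, all fake nodes
carved out of one physical node report the identical CPU set, and a
CPU-less node (like the GPU node further below) prints an empty list:

/* Print the CPUs reported for each configured NUMA node. */
#include <numa.h>
#include <stdio.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "NUMA not available\n");
		return 1;
	}

	int max_node = numa_max_node();
	int ncpus = numa_num_configured_cpus();
	struct bitmask *mask = numa_allocate_cpumask();

	for (int node = 0; node <= max_node; node++) {
		if (numa_node_to_cpus(node, mask) < 0)
			continue;	/* node not present */
		printf("node %d cpus:", node);
		for (int cpu = 0; cpu < ncpus; cpu++)
			if (numa_bitmask_isbitset(mask, cpu))
				printf(" %d", cpu);
		printf("\n");
	}

	numa_bitmask_free(mask);
	return 0;
}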

> 
> 
>>
>> With the recent M. Rapoport fake-numa patch set in mm-everything,
>> this patch on top, and using the numa=fake=4 boot parameter:
>> # numactl --hardware
>> available: 12 nodes (0-11)
>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 0 size: 122518 MB
>> node 0 free: 116429 MB
>> node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 1 size: 122631 MB
>> node 1 free: 122576 MB
>> node 2 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 2 size: 122599 MB
>> node 2 free: 122544 MB
>> node 3 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>> 65 66 67 68 69 70 71
>> node 3 size: 122479 MB
>> node 3 free: 122419 MB
>> node 4 cpus:
>> node 4 size: 97280 MB
>> node 4 free: 97279 MB
> 
> 
> ^ Is this where your driver hotplugged a single node and hotplugged memory?
Yes, Node 4 is a GPU node and its memory has been hotplugged by the driver.
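
For context, a purely hypothetical kernel-side sketch (not the actual
driver code) of how a driver can hotplug device memory onto its own
CPU-less node via add_memory_driver_managed(); gpu_nid, gpu_mem_base and
gpu_mem_size are assumed to come from the device probe path:

/* Hypothetical sketch only, not the actual driver. */
#include <linux/memory_hotplug.h>
#include <linux/printk.h>

static int gpu_hotplug_device_memory(int gpu_nid, u64 gpu_mem_base,
				     u64 gpu_mem_size)
{
	int ret;

	/*
	 * Register the device memory as driver-managed "System RAM" on
	 * the dedicated node; it then shows up in numactl --hardware as
	 * a node with memory but no CPUs (node 4 in the output above).
	 */
	ret = add_memory_driver_managed(gpu_nid, gpu_mem_base, gpu_mem_size,
					"System RAM (gpu)", MHP_NONE);
	if (ret)
		pr_err("gpu: memory hotplug on node %d failed: %d\n",
		       gpu_nid, ret);

	return ret;
}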

> 
> 
> -- 
> Cheers,
> 
> David / dhildenb
> 
Thanks for your review and comments/questions, bye,
Bruno

