Message-ID: <da929a90-2978-44b5-aed8-1af735176040@redhat.com>
Date: Tue, 14 Jan 2025 15:15:53 +0100
From: David Hildenbrand <david@...hat.com>
To: Bruno Faccini <bfaccini@...dia.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org, rppt@...nel.org,
 ziy@...dia.com, jhubbard@...dia.com, mrusiniak@...dia.com
Subject: Re: [PATCH 0/1] mm/fake-numa: allow later numa node hotplug

>>
>> I would have thought, just like memory, that one resource only belongs
>> to one NUMA node.
> "All fake-NUMA nodes that belong to a physical NUMA node share the same
> CPU cores"; this was already the case in the original, x86-only
> implementation, so that fake-NUMA does not affect application launch
> commands.

Thanks! Interesting; a bit unexpected :)

> 
>>
>>
>>>
>>> With recent M.Rapoport set of fake-numa patches in mm-everything,
>>> this patch on top, using numa=fake=4 boot parameter :
>>> # numactl --hardware
>>> available: 12 nodes (0-11)
>>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>>> 65 66 67 68 69 70 71
>>> node 0 size: 122518 MB
>>> node 0 free: 116429 MB
>>> node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>>> 65 66 67 68 69 70 71
>>> node 1 size: 122631 MB
>>> node 1 free: 122576 MB
>>> node 2 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>>> 65 66 67 68 69 70 71
>>> node 2 size: 122599 MB
>>> node 2 free: 122544 MB
>>> node 3 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
>>> 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
>>> 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
>>> 65 66 67 68 69 70 71
>>> node 3 size: 122479 MB
>>> node 3 free: 122419 MB
>>> node 4 cpus:
>>> node 4 size: 97280 MB
>>> node 4 free: 97279 MB
>>
>>
>> ^ Is this where your driver hotplugged a single node and hotplugged memory?
> Yes, Node 4 is a GPU node and its memory has been hotplugged by the driver.

Thanks!
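
[Editor's aside, not part of the original mail: the `numactl --hardware`
output quoted above follows a simple `node N size/free: X MB` line format,
so the per-node figures can be pulled out with a short parser sketch like
the one below. The sample lines are copied from the output above; the
function name is hypothetical.]

```python
import re

# Sample lines copied from the `numactl --hardware` output quoted above.
SAMPLE = """\
node 0 size: 122518 MB
node 0 free: 116429 MB
node 4 size: 97280 MB
node 4 free: 97279 MB
"""

def parse_nodes(text):
    """Return {node: {'size': MB, 'free': MB}} from numactl-style lines."""
    nodes = {}
    for m in re.finditer(r"node (\d+) (size|free): (\d+) MB", text):
        node, key, mb = int(m.group(1)), m.group(2), int(m.group(3))
        nodes.setdefault(node, {})[key] = mb
    return nodes

nodes = parse_nodes(SAMPLE)
# Node 4 is the hotplugged GPU node from the discussion above.
print(nodes[4]["size"])  # 97280
```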

-- 
Cheers,

David / dhildenb

