Message-ID: <20181228023158.v3zvbp3k7coodctv@wfg-t540p.sh.intel.com>
Date:   Fri, 28 Dec 2018 10:31:58 +0800
From:   Fengguang Wu <fengguang.wu@...el.com>
To:     Christopher Lameter <cl@...ux.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Fan Du <fan.du@...el.com>, kvm@...r.kernel.org,
        LKML <linux-kernel@...r.kernel.org>,
        Yao Yuan <yuan.yao@...el.com>,
        Peng Dong <dongx.peng@...el.com>,
        Huang Ying <ying.huang@...el.com>,
        Liu Jingqi <jingqi.liu@...el.com>,
        Dong Eddie <eddie.dong@...el.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Zhang Yi <yi.z.zhang@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>
Subject: Re: [RFC][PATCH v2 08/21] mm: introduce and export pgdat peer_node

On Thu, Dec 27, 2018 at 08:07:26PM +0000, Christopher Lameter wrote:
>On Wed, 26 Dec 2018, Fengguang Wu wrote:
>
>> Each CPU socket can have 1 DRAM and 1 PMEM node, we call them "peer nodes".
>> Migration between DRAM and PMEM will by default happen between peer nodes.
>
>Which one does numa_node_id() point to? I guess that is the DRAM node and

Yes. On our test machine, the PMEM nodes show up as memory-only nodes, so
numa_node_id() points to the DRAM node.

Here is numactl --hardware output on a 2S test machine.

available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77
node 0 size: 257712 MB
node 0 free: 178251 MB
node 1 cpus: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103
node 1 size: 258038 MB
node 1 free: 174796 MB
node 2 cpus:
node 2 size: 503999 MB
node 2 free: 438349 MB
node 3 cpus:
node 3 size: 503999 MB
node 3 free: 438349 MB
node distances:
node   0   1   2   3
  0:  10  21  20  20
  1:  21  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10
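
To make the peer mapping concrete, here is a small standalone sketch (not
the patch code) of how the nodes above could pair up. The 0<->2 / 1<->3
pairing and the table layout are assumptions for illustration; the real
mapping is derived per socket by the patchset.

/* Standalone model of the peer_node idea; illustrative only. */
#include <stdio.h>

#define NR_NODES 4

/* Nodes 0/1 are DRAM (have CPUs), nodes 2/3 are memory-only PMEM. */
static const int peer_node[NR_NODES] = {
        [0] = 2,        /* DRAM node 0 <-> PMEM node 2 (assumed pairing) */
        [1] = 3,        /* DRAM node 1 <-> PMEM node 3 (assumed pairing) */
        [2] = 0,
        [3] = 1,
};

int main(void)
{
        for (int nid = 0; nid < NR_NODES; nid++)
                printf("node %d peer_node %d\n", nid, peer_node[nid]);
        return 0;
}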

>then we fall back to the PMEM node?

Fallback is possible but not in the scope of this patchset. We modified
the fallback zonelists in patch 10 to simplify PMEM usage: with that
patch, page allocations on DRAM nodes won't fall back to PMEM nodes.
Instead, PMEM nodes will mainly be used via explicit numactl placement
and as migration targets. When there is memory pressure on a DRAM node,
cold LRU pages there will be demoted (migrated) to its peer PMEM node on
the same socket by patch 20. A rough sketch of that policy follows below.
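
The sketch below uses hypothetical helper names, not the patchset's code:
DRAM allocations never fall back to PMEM, and under pressure a DRAM node
demotes cold pages to its peer instead of reclaiming them right away.

/* Illustrative sketch of the demotion-target choice described above;
 * the 0<->2 / 1<->3 pairing is an assumption. */
#include <stdio.h>

#define NO_TARGET (-1)

/* Nodes 0/1: DRAM, nodes 2/3: memory-only PMEM (as in the topology above). */
static const int peer_node[] = { 2, 3, 0, 1 };

static int demotion_target(int nid)
{
        if (nid >= 2)                   /* PMEM nodes do not demote further */
                return NO_TARGET;
        return peer_node[nid];          /* DRAM demotes to its PMEM peer */
}

int main(void)
{
        for (int nid = 0; nid < 4; nid++)
                printf("node %d -> demotion target %d\n",
                       nid, demotion_target(nid));
        return 0;
}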

Thanks,
Fengguang
