Message-ID: <CAE9FiQXihFevPcT0DouES5pqBBnTQDOu0sPTTgMqMNq6LWT9Zg@mail.gmail.com>
Date:	Fri, 28 Aug 2015 08:57:44 -0700
From:	Yinghai Lu <yinghai@...nel.org>
To:	Steffen Persvold <sp@...ascale.com>
Cc:	x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: CONFIG_HOLES_IN_ZONE and memory hot plug code on x86_64

On Thu, Aug 27, 2015 at 11:42 PM, Steffen Persvold <sp@...ascale.com> wrote:
>>Can you post the whole log with SRAT-related info?
>
> I can probably reproduce this and get full logs when I get run time on the system again, but here's some output that we saved in our internal Jira case:
>
> [    0.000000] NUMA: Initialized distance table, cnt=6
> [    0.000000] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xd7ffffff] -> [mem 0x00000000-0xd7ffffff]
> [    0.000000] NUMA: Node 0 [mem 0x00000000-0xd7ffffff] + [mem 0x100000000-0x427ffffff] -> [mem 0x00000000-0x427ffffff]
> [    0.000000] NODE_DATA(0) allocated [mem 0x407fe3000-0x407ffffff]
> [    0.000000] NODE_DATA(1) allocated [mem 0x807fe3000-0x807ffffff]
> [    0.000000] NODE_DATA(2) allocated [mem 0xc07fe3000-0xc07ffffff]
> [    0.000000] NODE_DATA(3) allocated [mem 0x1007fe3000-0x1007ffffff]
> [    0.000000] NODE_DATA(4) allocated [mem 0x1407fe3000-0x1407ffffff]
> [    0.000000] NODE_DATA(5) allocated [mem 0x1807fdd000-0x1807ff9fff]
> [    0.000000]  [ffffea0000000000-ffffea00101fffff] PMD -> [ffff8803f8600000-ffff880407dfffff] on node 0
> [    0.000000]  [ffffea0010a00000-ffffea00201fffff] PMD -> [ffff8807f8600000-ffff880807dfffff] on node 1
> [    0.000000]  [ffffea0020a00000-ffffea00301fffff] PMD -> [ffff880bf8600000-ffff880c07dfffff] on node 2
> [    0.000000]  [ffffea0030a00000-ffffea00401fffff] PMD -> [ffff880ff8600000-ffff881007dfffff] on node 3
> [    0.000000]  [ffffea0040a00000-ffffea00501fffff] PMD -> [ffff8813f8600000-ffff881407dfffff] on node 4
> [    0.000000]  [ffffea0050a00000-ffffea00601fffff] PMD -> [ffff8817f7e00000-ffff8818075fffff] on node 5
>
> If I remember correctly, there was a mix of 4 GB and 8 GB DIMMs populated on this system. In addition, the firmware reserved 512 MB at the end of each memory controller's physical range (hence the reserved ranges in the e820 map).
>
> Note: this was with vanilla 4.1.0, so it could be obsolete now with 4.2-rc. I have not yet tested with the latest patches that you and Tony discussed.
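
For what it's worth, the 512 MB reservation already shows up in the
numbers you quoted: NODE_DATA(0) ends at 0x407ffffff while the merged
node 0 span runs to 0x427ffffff, which leaves 0x20000000 bytes (512 MB)
at the top of the node. A quick sketch of that arithmetic in python
(assuming NODE_DATA really lands at the top of the node's usable RAM,
as top-down memblock allocation would normally place it; I have not
checked this against your e820 map):

    # Gap at the top of node 0, using the addresses from the quoted log.
    # Assumption (not checked against e820): NODE_DATA(0) sits at the top
    # of the node's usable memory, so its end marks where usable RAM stops.
    node0_span_end = 0x427ffffff   # NUMA: Node 0 ... -> [mem 0x00000000-0x427ffffff]
    node0_data_end = 0x407ffffff   # NODE_DATA(0) allocated [mem ...-0x407ffffff]
    gap = node0_span_end - node0_data_end
    print(f"{gap:#x} bytes = {gap >> 20} MiB missing at the top of node 0")
    # -> 0x20000000 bytes = 512 MiB missing at the top of node 0

If the other nodes follow the same pattern, each node span ends with a
512 MB hole just below its upper boundary.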

We still need to see your SRAT table layout, like the one from Tony's setup:

[    0.000000] SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
[    0.000000] SRAT: Node 0 PXM 0 [mem 0x100000000-0xfffffffff]
[    0.000000] SRAT: Node 0 PXM 0 [mem 0x1000000000-0x1d6fffffff]
[    0.000000] SRAT: Node 1 PXM 1 [mem 0x1d70000000-0x2c17ffffff]
[    0.000000] SRAT: Node 1 PXM 1 [mem 0x2c18000000-0x3abfffffff]
[    0.000000] SRAT: Node 2 PXM 2 [mem 0x3ac0000000-0x4967ffffff]
[    0.000000] SRAT: Node 2 PXM 2 [mem 0x4968000000-0x580fffffff]
[    0.000000] SRAT: Node 3 PXM 3 [mem 0x5810000000-0x66b7ffffff]
[    0.000000] SRAT: Node 3 PXM 3 [mem 0x66b8000000-0x755fffffff]
[    0.000000] NUMA: Initialized distance table, cnt=4
[    0.000000] NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0xfffffffff] -> [mem 0x00000000-0xfffffffff]
[    0.000000] NUMA: Node 0 [mem 0x00000000-0xfffffffff] + [mem 0x1000000000-0x1d6fffffff] -> [mem 0x00000000-0x1d6fffffff]
[    0.000000] NUMA: Node 1 [mem 0x1d70000000-0x2c17ffffff] + [mem 0x2c18000000-0x3abfffffff] -> [mem 0x1d70000000-0x3abfffffff]
[    0.000000] NUMA: Node 2 [mem 0x3ac0000000-0x4967ffffff] + [mem 0x4968000000-0x580fffffff] -> [mem 0x3ac0000000-0x580fffffff]
[    0.000000] NUMA: Node 3 [mem 0x5810000000-0x66b7ffffff] + [mem 0x66b8000000-0x755fffffff] -> [mem 0x5810000000-0x755fffffff]
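
If grabbing the full log is awkward, feeding a dmesg capture through a
small throwaway script is usually enough to spot the interesting parts.
A sketch in python (the regex assumes the SRAT lines look exactly like
the ones above; it only reports gaps between consecutive ranges, and
says nothing about whether the firmware reservations show up in SRAT
or only in e820):

    import re
    import sys

    # Match lines like:
    #   [    0.000000] SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
    PAT = re.compile(r"SRAT: Node (\d+) PXM \d+ "
                     r"\[mem (0x[0-9a-fA-F]+)-(0x[0-9a-fA-F]+)\]")

    ranges = []
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            node = int(m.group(1))
            start, end = int(m.group(2), 16), int(m.group(3), 16)
            ranges.append((start, end, node))

    ranges.sort()
    for (s1, e1, n1), (s2, e2, n2) in zip(ranges, ranges[1:]):
        gap = s2 - (e1 + 1)
        if gap > 0:
            print(f"{gap:#x} byte gap after node {n1} range ending at {e1:#x}, "
                  f"before node {n2} range starting at {s2:#x}")

On Tony's layout above it prints nothing, since the SRAT ranges are
contiguous; on your box it should at least show the hole below 4 GB,
and the full dmesg would tell us whether the 512 MB reservations are
visible in SRAT too or only as reserved e820 ranges.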
