Date:	Wed, 12 Sep 2012 14:33:11 +0900
From:	Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>
To:	<x86@...nel.org>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: hot-added cpu is not assigned to the correct node

When I hot-added CPUs and memory simultaneously using the container driver,
all the hot-added CPUs were mistakenly assigned to node0.

According to my DSDT, the hot-added CPUs and memory have PXM#1. So in my
system, these devices should be assigned to node1 as follows:

--- Expected result
ls /sys/devices/system/node/node1/:
cpu16 cpu17 cpu18 cpu19 cpu20 cpu21 cpu22 cpu23 cpu24 cpu25 cpu26 cpu27
cpu28 cpu29 cpu30 cpu31 cpulist ... memory512 memory513 - 767 meminfo ...

=> hot-added CPUs and memory are assigned to the same node.
---

But in actuality, the CPUs were assigned to node0 and the memory was assigned
to node1 as follows:

--- Actual result
ls /sys/devices/system/node/node0/:
cpu0 cpu1 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9 cpu10 cpu11 cpu12 cpu13
cpu14 cpu15 cpu16 cpu17 cpu18 cpu19 cpu20 cpu21 cpu22 cpu23 cpu24 cpu25 cpu26
cpu27 cpu28 cpu29 cpu30 cpu31 cpulist ... memory1 memory2 - 255 meminfo ...

ls /sys/devices/system/node/node1/:
cpulist memory512 memory513 - 767 meminfo ...

=> hot-added CPUs are assigned to node0 and hot-added memory is assigned to
   node1. The CPUs and memory have the same PXM#, but they are assigned to
   different nodes.
---

In my investigation, "acpi_map_cpu2node()" causes the problem.

---
/* arch/x86/kernel/acpi/boot.c */
static void __cpuinit acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
{
#ifdef CONFIG_ACPI_NUMA
	int nid;

	nid = acpi_get_node(handle);
	if (nid == -1 || !node_online(nid))
		return;
	set_apicid_to_node(physid, nid);
	numa_set_node(cpu, nid);
#endif
}
---

In my DSDT, the CPUs are listed ahead of the memory, so the CPUs are hot-added
before the memory. Thus the system temporarily has a memory-less node. In this
case, the node_online() check fails, so the CPU is assigned to node 0.

When I listed the memory ahead of the CPUs in the DSDT, the CPUs were assigned
to the correct node. So in current Linux, whether the CPUs are assigned to the
correct node depends on the order of hot-added resources in the DSDT.

The ACPI specification doesn't define the order of hot-added resources, so I
think the kernel should properly handle any DSDT that conforms to the
specification.

I'm thinking about a solution to this problem, but I don't have any good idea
yet. Does anyone have an opinion on how we should handle it?
