Message-Id: <1446220119-3750-3-git-send-email-prarit@redhat.com>
Date: Fri, 30 Oct 2015 11:48:38 -0400
From: Prarit Bhargava <prarit@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: trenn@...e.de, Prarit Bhargava <prarit@...hat.com>
Subject: [PATCH 2/3] cpu hotplug, add CONFIG_PERMANENT_CPU_TOPOLOGY
The information in the /sys/devices/system/cpu/cpuX/topology
directory is useful for userspace monitoring applications and for in-tree
utilities like cpupower and turbostat.
When a CPU is downed, the /sys/devices/system/cpu/cpuX/topology directory is
removed during the CPU_DEAD hotplug callback in the kernel. The problem
with this model is that the CPU has not been physically removed and the
data in the topology directory is still valid, yet the CPU's location is
now lost to userspace.
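
For example, a monitoring tool that wants to map a logical CPU back to its
socket and core typically reads these files directly. A minimal, hypothetical
userspace sketch (error handling trimmed) of the lookup that breaks today once
the CPU is downed:

#include <stdio.h>

/* Hypothetical helper: map a logical CPU to its package/core via sysfs.
 * Without this patch the topology directory disappears when the CPU is
 * soft-offlined, so the fopen() below fails. */
static int read_topology_id(int cpu, const char *name, int *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", val) != 1)
		*val = -1;
	fclose(f);
	return 0;
}

int main(void)
{
	int pkg, core;

	if (read_topology_id(10, "physical_package_id", &pkg) ||
	    read_topology_id(10, "core_id", &core)) {
		fprintf(stderr, "cpu10: topology not available\n");
		return 1;
	}
	printf("cpu10: package %d core %d\n", pkg, core);
	return 0;
}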
This patch adds CONFIG_PERMANENT_CPU_TOPOLOGY, which defaults to Y for
x86 and to N for all other arches. When enabled, the topology directory
is added to the core CPU sysfs files so that it exists for as long as
the CPU is physically present. When disabled, the current kernel
behavior is maintained (that is, the topology directory is removed on a
soft down and re-added on a soft up).
Enabling CONFIG_PERMANENT_CPU_TOPOLOGY may require additional
architecture-specific changes so that the cpumask data for the CPU's
topology is not cleared during a CPU down.
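
Conceptually, the driver-core side of the change just hangs the existing
topology attribute group off the per-CPU device itself, so sysfs creates the
directory when the device is registered and only tears it down when the device
goes away, rather than creating and removing it from hotplug notifiers. A
simplified sketch (the real hunks are in the diff below):

/* Sketch only; see the drivers/base/cpu.c hunks below. */
static const struct attribute_group *common_cpu_attr_groups[] = {
#ifdef CONFIG_PERMANENT_CPU_TOPOLOGY
	&topology_attr_group,	/* created with the CPU device, kept across a soft offline */
#endif
	NULL
};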
Before patch:
[root@...z620-01 ~]# grep ^ /sys/devices/system/cpu/cpu10/topology/*
/sys/devices/system/cpu/cpu10/topology/core_id:3
/sys/devices/system/cpu/cpu10/topology/core_siblings:ffff
/sys/devices/system/cpu/cpu10/topology/core_siblings_list:0-15
/sys/devices/system/cpu/cpu10/topology/physical_package_id:0
/sys/devices/system/cpu/cpu10/topology/thread_siblings:0404
/sys/devices/system/cpu/cpu10/topology/thread_siblings_list:2,10
Down a CPU:
[root@...z620-01 ~]# echo 0 > /sys/devices/system/cpu/cpu10/online
[root@...z620-01 ~]# ls /sys/devices/system/cpu/cpu10/topology
ls: cannot access topology: No such file or directory
After patch:
[root@...z620-01 ~]# grep ^ /sys/devices/system/cpu/cpu10/topology/*
/sys/devices/system/cpu/cpu10/topology/core_id:3
/sys/devices/system/cpu/cpu10/topology/core_siblings:ffff
/sys/devices/system/cpu/cpu10/topology/core_siblings_list:0-15
/sys/devices/system/cpu/cpu10/topology/physical_package_id:0
/sys/devices/system/cpu/cpu10/topology/thread_siblings:0404
/sys/devices/system/cpu/cpu10/topology/thread_siblings_list:2,10
Down a CPU:
[root@...z620-01 ~]# echo 0 > /sys/devices/system/cpu/cpu10/online
[root@...z620-01 ~]# grep ^ /sys/devices/system/cpu/cpu10/topology/*
/sys/devices/system/cpu/cpu10/topology/core_id:3
/sys/devices/system/cpu/cpu10/topology/core_siblings:0000
/sys/devices/system/cpu/cpu10/topology/core_siblings_list:
/sys/devices/system/cpu/cpu10/topology/physical_package_id:0
/sys/devices/system/cpu/cpu10/topology/thread_siblings:0000
/sys/devices/system/cpu/cpu10/topology/thread_siblings_list:
I did some testing with and without BOOTPARAM_HOTPLUG_CPU0 enabled,
upping and downing CPUs in sequence, randomly, by thread group, and by
socket group, and didn't see any issues.
Note: core_siblings and thread_siblings are "numa siblings that are online"
and "thread siblings that are online" and are used as such within the kernel.
They must be zeroed out while the CPU is offline.
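
With the patch applied, userspace should therefore key off the online flag (or
an empty sibling mask) rather than the presence of the topology directory. A
hypothetical check based on the output above:

#include <stdio.h>

/* Hypothetical check: an offline CPU keeps physical_package_id/core_id
 * but reports an empty thread_siblings_list, per the note above. */
static int cpu_online_by_siblings(int cpu)
{
	char path[160], buf[64] = "";
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
		 cpu);
	f = fopen(path, "r");
	if (!f)
		return -1;	/* no topology directory at all */
	if (!fgets(buf, sizeof(buf), f))
		buf[0] = '\0';
	fclose(f);
	/* an empty file (just a newline) means the CPU is currently offline */
	return buf[0] != '\0' && buf[0] != '\n';
}

int main(void)
{
	int on = cpu_online_by_siblings(10);

	if (on < 0)
		fprintf(stderr, "cpu10: no topology directory\n");
	else
		printf("cpu10 is %s\n", on ? "online" : "offline");
	return 0;
}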
---
arch/x86/kernel/smpboot.c | 3 ---
drivers/base/Kconfig | 12 ++++++++++++
drivers/base/cpu.c | 8 ++++++++
3 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 892ee2e5..6591195 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1332,7 +1332,6 @@ __init void prefill_possible_map(void)
static void remove_siblinginfo(int cpu)
{
int sibling;
- struct cpuinfo_x86 *c = &cpu_data(cpu);
for_each_cpu(sibling, topology_core_cpumask(cpu)) {
cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
@@ -1350,8 +1349,6 @@ static void remove_siblinginfo(int cpu)
cpumask_clear(cpu_llc_shared_mask(cpu));
cpumask_clear(topology_sibling_cpumask(cpu));
cpumask_clear(topology_core_cpumask(cpu));
- c->phys_proc_id = 0;
- c->cpu_core_id = 0;
cpumask_clear_cpu(cpu, cpu_sibling_setup_mask);
}
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 98504ec..b3935a2 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -324,4 +324,16 @@ config CMA_ALIGNMENT
endif
+config PERMANENT_CPU_TOPOLOGY
+ bool "Permanent CPU Topology"
+ depends on HOTPLUG_CPU
+ default y if X86_64
+ help
+ This option configures CPU topology to be permanent for the lifetime
+ of the CPU (until it is physically removed). Selecting Y here
+ results in the kernel reporting the physical location for offlined
+ CPUs.
+
+ If unsure, leave the default value as is.
+
endmenu
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index b939c98..9c30782 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -250,6 +250,7 @@ static struct attribute_group topology_attr_group = {
.name = "topology"
};
+#ifndef CONFIG_PERMANENT_CPU_TOPOLOGY
/* Add/Remove cpu_topology interface for CPU device */
static int topology_add_dev(unsigned int cpu)
{
@@ -306,11 +307,15 @@ out:
}
device_initcall(topology_sysfs_init);
+#endif
static const struct attribute_group *common_cpu_attr_groups[] = {
#ifdef CONFIG_KEXEC
&crash_note_cpu_attr_group,
#endif
+#ifdef CONFIG_PERMANENT_CPU_TOPOLOGY
+ &topology_attr_group,
+#endif
NULL
};
@@ -318,6 +323,9 @@ static const struct attribute_group *hotplugable_cpu_attr_groups[] = {
#ifdef CONFIG_KEXEC
&crash_note_cpu_attr_group,
#endif
+#ifdef CONFIG_PERMANENT_CPU_TOPOLOGY
+ &topology_attr_group,
+#endif
NULL
};
--
1.7.9.3