Message-ID: <1506678805-15392-2-git-send-email-thunder.leizhen@huawei.com>
Date: Fri, 29 Sep 2017 17:53:25 +0800
From: Zhen Lei <thunder.leizhen@...wei.com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-api <linux-api@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Michal Hocko <mhocko@...e.com>, linux-mm <linux-mm@...ck.org>
CC: Tianhong Ding <dingtianhong@...wei.com>,
Hanjun Guo <guohanjun@...wei.com>,
Libin <huawei.libin@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Zhen Lei <thunder.leizhen@...wei.com>
Subject: [PATCH v2 1/1] mm: only display online cpus of the numa node
When I executed numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
and displays cpumask_of_node for each node), I got different results on
x86 and arm64: for each NUMA node, the former displayed only the online CPUs,
while the latter displayed all possible CPUs. Unfortunately, neither the Linux
documentation nor the numactl manual describes this behaviour clearly.

I sent a mail to ask for help, and Michal Hocko <mhocko@...nel.org> replied
that he preferred to print online cpus, because it does not really make much
sense to bind anything to offline nodes.
Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
Acked-by: Michal Hocko <mhocko@...e.com>
---
drivers/base/node.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 3855902..aae2402 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -27,13 +27,21 @@ static struct bus_type node_subsys = {
 
 static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
 {
+	ssize_t n;
+	cpumask_var_t mask;
 	struct node *node_dev = to_node(dev);
-	const struct cpumask *mask = cpumask_of_node(node_dev->dev.id);
 
 	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
 	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
 
-	return cpumap_print_to_pagebuf(list, buf, mask);
+	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+		return 0;
+
+	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
+	n = cpumap_print_to_pagebuf(list, buf, mask);
+	free_cpumask_var(mask);
+
+	return n;
 }
 
 static inline ssize_t node_read_cpumask(struct device *dev,
--
2.5.0