Message-ID: <20170825173433.GB26878@arm.com>
Date: Fri, 25 Aug 2017 18:34:33 +0100
From: Will Deacon <will.deacon@....com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Zhen Lei <thunder.leizhen@...wei.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-api <linux-api@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-mm <linux-mm@...ck.org>, Zefan Li <lizefan@...wei.com>,
Xinwei Hu <huxinwei@...wei.com>,
Tianhong Ding <dingtianhong@...wei.com>,
Hanjun Guo <guohanjun@...wei.com>,
Catalin Marinas <catalin.marinas@....com>
Subject: Re: [PATCH 1/1] mm: only display online cpus of the numa node
On Thu, Aug 24, 2017 at 10:32:26AM +0200, Michal Hocko wrote:
> It seems this has slipped through the cracks. Let's CC the arm64 guys.
>
> On Tue 20-06-17 20:43:28, Zhen Lei wrote:
> > When I ran numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
> > and displays cpumask_of_node for each node), I got different results on
> > x86 and arm64. For each NUMA node, the former displayed only online CPUs,
> > while the latter displayed all possible CPUs. Unfortunately, neither the
> > Linux documentation nor the numactl manual describes this clearly.
> >
> > I sent a mail asking for help, and Michal Hocko <mhocko@...nel.org> replied
> > that he preferred printing online CPUs, because it doesn't really make much
> > sense to bind anything to offline nodes.
>
> Yes, printing offline CPUs is just confusing, and more so when the
> behavior is not consistent across architectures. I believe the x86
> behavior is the more appropriate one: it is more logical to dump
> the NUMA topology and use it directly for affinity setting than to add
> an extra step of checking each CPU's state to achieve the same.
>
> It is true that the online/offline state might change at any time, so the
> above might be tricky on its own, but we should at least make the
> behavior consistent.
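[Editor's illustration, not part of the original thread: the semantics being
agreed on above amount to intersecting a node's possible-CPU mask with the
online mask, which is what the patch's cpumask_and() call computes. A minimal
Python sketch with made-up CPU numbers; the hex output mimics the cpumap
format:]

```python
# Illustrative only: model each cpumask as a set of CPU numbers.
node_cpus = {0, 1, 2, 3, 4, 5, 6, 7}   # cpumask_of_node(nid): possible CPUs
online_cpus = {0, 1, 2, 3}             # cpu_online_mask: currently online

def node_cpumap(node_cpus, online_cpus):
    """Bitmask of online CPUs on the node, as an integer (cf. cpumask_and)."""
    mask = 0
    for cpu in node_cpus & online_cpus:
        mask |= 1 << cpu
    return mask

# Only the online CPUs 0-3 survive the intersection.
print(format(node_cpumap(node_cpus, online_cpus), "x"))  # -> "f"
```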
>
> > Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
>
> Acked-by: Michal Hocko <mhocko@...e.com>
The concept looks fine to me, but shouldn't we use cpumask_var_t and
alloc/free_cpumask_var?
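[Editor's note: a minimal sketch of the cpumask_var_t variant Will is asking
about, following the usual alloc_cpumask_var()/free_cpumask_var() pattern so
the mask is not placed on the stack when CONFIG_CPUMASK_OFFSTACK=y; returning
-ENOMEM on allocation failure is an assumption about how the error would be
surfaced, not something stated in the thread:]

```c
static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
{
	ssize_t n;
	cpumask_var_t mask;
	struct node *node_dev = to_node(dev);

	/* Heap-allocated when CONFIG_CPUMASK_OFFSTACK=y, on-stack otherwise. */
	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
	n = cpumap_print_to_pagebuf(list, buf, mask);
	free_cpumask_var(mask);

	return n;
}
```

Note that cpumask_var_t already behaves like a pointer in the off-stack
configuration, so it is passed to cpumask_and() and
cpumap_print_to_pagebuf() without taking its address.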
Will
> > drivers/base/node.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/base/node.c b/drivers/base/node.c
> > index 5548f96..d5e7ce7 100644
> > --- a/drivers/base/node.c
> > +++ b/drivers/base/node.c
> > @@ -28,12 +28,14 @@ static struct bus_type node_subsys = {
> > static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
> > {
> > struct node *node_dev = to_node(dev);
> > - const struct cpumask *mask = cpumask_of_node(node_dev->dev.id);
> > + struct cpumask mask;
> > +
> > + cpumask_and(&mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
> >
> > /* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
> > BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
> >
> > - return cpumap_print_to_pagebuf(list, buf, mask);
> > + return cpumap_print_to_pagebuf(list, buf, &mask);
> > }
> >
> > static inline ssize_t node_read_cpumask(struct device *dev,
> > --
> > 2.5.0
> >
> >
>
> --
> Michal Hocko
> SUSE Labs