Message-Id: <20191105173359.39052327cf221d9c4b26b783@linux-foundation.org>
Date: Tue, 5 Nov 2019 17:33:59 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: Shaokun Zhang <zhangshaokun@...ilicon.com>,
linux-kernel@...r.kernel.org, yuqi jin <jinyuqi@...wei.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Paul Burton <paul.burton@...s.com>,
Michael Ellerman <mpe@...erman.id.au>,
Anshuman Khandual <anshuman.khandual@....com>
Subject: Re: [PATCH v2] lib: optimize cpumask_local_spread()
On Tue, 5 Nov 2019 08:01:41 +0100 Michal Hocko <mhocko@...nel.org> wrote:
> On Mon 04-11-19 18:27:48, Shaokun Zhang wrote:
> > From: yuqi jin <jinyuqi@...wei.com>
> >
> > In a multi-processor NUMA system, the CPUs serving an I/O device may
> > belong to multiple NUMA nodes. When the device's local NUMA node runs
> > out of CPUs, it is better to pick the node closest to the local node
> > instead of choosing any online cpu immediately.
> >
> > The current code only considers the local NUMA node and does not
> > compute the distances to the other NUMA nodes when it has to fall
> > back to a non-local one. Let's optimize it and find the nearest node
> > through the NUMA distance. Performance is better when the nearest
> > node is returned rather than a random one.
>
> Numbers please
The changelog had
: When a Parameter Server workload is tested using a NIC device on the
: Huawei Kunpeng 920 SoC:
: Without the patch, the performance is 22W (220,000) QPS;
: with this patch applied, the performance improves to 26W (260,000) QPS.
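For context, cpumask_local_spread() is what drivers use to pick a CPU
for things like per-queue IRQ affinity. A typical call site would look
something like this (`dev', `irqs' and `nr_queues' are illustrative,
not from any real driver):

	/*
	 * Spread per-queue IRQs across online CPUs, preferring CPUs on
	 * (or, with this patch, near) the device's NUMA node.
	 */
	for (q = 0; q < nr_queues; q++) {
		unsigned int cpu = cpumask_local_spread(q, dev_to_node(dev));

		irq_set_affinity_hint(irqs[q], cpumask_of(cpu));
	}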
> [...]
> > +/**
> > + * cpumask_local_spread - select the i'th cpu with local numa cpus first
> > + * @i: index number
> > + * @node: local numa_node
> > + *
> > + * This function selects an online CPU according to a numa aware policy;
> > + * local cpus are returned first, followed by the nearest non-local ones,
> > + * then it wraps around.
> > + *
> > + * It's not very efficient, but useful for setup.
> > + */
> > +unsigned int cpumask_local_spread(unsigned int i, int node)
> > +{
> > + int node_dist[MAX_NUMNODES] = {0};
> > + bool used[MAX_NUMNODES] = {0};
>
> Ugh. This might be a lot of stack space. Some distro kernels use a large
> NODE_SHIFT (e.g. 10, which makes MAX_NUMNODES 1024, so this would be 4kB
> of stack space just for node_dist).
Yes, that's big. From a quick peek I suspect we could get by with an
array of unsigned shorts here, but that might be fragile over time even
if it works now?
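For illustration, that variant would just be (the fragility being the
unchecked assumption that node_distance() always fits in 16 bits):

	/* 2kB rather than 4kB of stack with NODE_SHIFT == 10 */
	unsigned short node_dist[MAX_NUMNODES] = {0};
	bool used[MAX_NUMNODES] = {0};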
Perhaps we could make these statically allocated arrays and protect the
entire thing with a spin_lock_irqsave()? It's not a frequently called
function.
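Untested sketch of that approach (spread_lock, node_used and the simple
nearest-node selection loop are illustrative, not taken from the patch):

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/topology.h>

static DEFINE_SPINLOCK(spread_lock);
static int node_dist[MAX_NUMNODES];	/* protected by spread_lock */
static bool node_used[MAX_NUMNODES];	/* protected by spread_lock */

unsigned int cpumask_local_spread(unsigned int i, int node)
{
	unsigned long flags;
	int cpu, j, nid;

	/* Wrap: we always want a cpu. */
	i %= num_online_cpus();

	if (node == NUMA_NO_NODE) {
		for_each_cpu(cpu, cpu_online_mask)
			if (i-- == 0)
				return cpu;
		BUG();
	}

	spin_lock_irqsave(&spread_lock, flags);

	for (nid = 0; nid < nr_node_ids; nid++) {
		node_dist[nid] = node_distance(node, nid);
		node_used[nid] = false;
	}

	/* Visit nodes from nearest to farthest */
	for (j = 0; j < nr_node_ids; j++) {
		int best = NUMA_NO_NODE;

		for (nid = 0; nid < nr_node_ids; nid++)
			if (!node_used[nid] && (best == NUMA_NO_NODE ||
			    node_dist[nid] < node_dist[best]))
				best = nid;
		node_used[best] = true;

		for_each_cpu_and(cpu, cpumask_of_node(best), cpu_online_mask)
			if (i-- == 0) {
				spin_unlock_irqrestore(&spread_lock, flags);
				return cpu;
			}
	}

	spin_unlock_irqrestore(&spread_lock, flags);
	BUG();
}

The nearest-node selection is O(nr_node_ids^2) with interrupts off, but
for a function that is only called at setup time that seems tolerable.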