Message-ID: <f1f92a35-f7a4-8710-9a1a-21561e76f5ff@hisilicon.com>
Date:   Wed, 6 Nov 2019 10:49:19 +0800
From:   Shaokun Zhang <zhangshaokun@...ilicon.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>
CC:     <linux-kernel@...r.kernel.org>, yuqi jin <jinyuqi@...wei.com>,
        "Mike Rapoport" <rppt@...ux.ibm.com>,
        Paul Burton <paul.burton@...s.com>,
        "Michael Ellerman" <mpe@...erman.id.au>,
        Anshuman Khandual <anshuman.khandual@....com>
Subject: Re: [PATCH v2] lib: optimize cpumask_local_spread()

Hi Andrew,

On 2019/11/6 9:33, Andrew Morton wrote:
> On Tue, 5 Nov 2019 08:01:41 +0100 Michal Hocko <mhocko@...nel.org> wrote:
> 
>> On Mon 04-11-19 18:27:48, Shaokun Zhang wrote:
>>> From: yuqi jin <jinyuqi@...wei.com>
>>>
>>> In a multi-processor NUMA system, the CPUs serving an I/O device may
>>> spread across multiple NUMA nodes. When the CPUs of the device's local
>>> NUMA node are exhausted, it is better to pick a CPU from the node
>>> closest to the local node instead of choosing any online CPU immediately.
>>>
>>> The current code only considers the local NUMA node and does not
>>> compute the distances between the non-local NUMA nodes. Let's optimize
>>> it to find the nearest node through the NUMA distance. Performance is
>>> better when the nearest node is returned rather than a random node.
>>
>> Numbers please
> 
> The changelog had
> 
> : When a Parameter Server workload is tested using a NIC on the Huawei
> : Kunpeng 920 SoC:
> : without the patch, performance is 22W (220,000) QPS;
> : with this patch, performance improves to 26W (260,000) QPS.
> 
>> [...]
>>> +/**
>>> + * cpumask_local_spread - select the i'th cpu, with local numa cpus first
>>> + * @i: index number
>>> + * @node: local numa_node
>>> + *
>>> + * This function selects an online CPU according to a numa aware policy;
>>> + * local cpus are returned first, followed by the nearest non-local ones,
>>> + * then it wraps around.
>>> + *
>>> + * It's not very efficient, but useful for setup.
>>> + */
>>> +unsigned int cpumask_local_spread(unsigned int i, int node)
>>> +{
>>> +	int node_dist[MAX_NUMNODES] = {0};
>>> +	bool used[MAX_NUMNODES] = {0};
>>
>> Ugh. This might be a lot of stack space. Some distro kernels use a large
>> NODE_SHIFT (e.g. 10, i.e. 1024 nodes, so this would be 4kB of stack space
>> just for node_dist).
> 
> Yes, that's big.  From a quick peek I suspect we could get by using an
> array of unsigned shorts here but that might be fragile over time even
> if it works now?
> 

Yes, how about defining another macro whose value is 128 (I am not sure
whether that is big enough for actual needs), as in the snippet below?

--->8
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-       int node_dist[MAX_NUMNODES] = {0};
-       bool used[MAX_NUMNODES] = {0};
+       #define NUMA_NODE_NR     128
+       int node_dist[NUMA_NODE_NR] = {0};
+       bool used[NUMA_NODE_NR] = {0};
        int cpu, j, id;

        /* Wrap: we always want a cpu. */
@@ -278,7 +279,7 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
                        if (i-- == 0)
                                return cpu;
        } else {
-               if (nr_node_ids > MAX_NUMNODES)
+               if (nr_node_ids > NUMA_NODE_NR)
                        return __cpumask_local_spread(i, node);

                calc_node_distance(node_dist, node);
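
For reference, calc_node_distance() is not shown in this hunk; a minimal
sketch of what such a helper might look like, assuming it simply records the
distance from @node to every other possible node via the existing
node_distance() API, could be:

static void calc_node_distance(int *node_dist, int node)
{
	int i;

	/* Record the NUMA distance from @node to every other node. */
	for (i = 0; i < nr_node_ids; i++)
		node_dist[i] = node_distance(node, i);
}

The caller can then repeatedly pick the not-yet-used node with the smallest
recorded distance and hand out its CPUs in order.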

> Perhaps we could make it a statically allocated array and protect the
> entire thing with a spin_lock_irqsave()?  It's not a frequently called

That is another way to solve this issue; I'm not sure which one you and Michal would prefer. ;-)
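
For illustration, a rough and untested sketch of that statically-allocated
variant in lib/cpumask.c might look like the following (the array and lock
names are only placeholders, and the distance-based selection itself is
elided):

static int spread_node_dist[MAX_NUMNODES];
static bool spread_node_used[MAX_NUMNODES];
static DEFINE_SPINLOCK(spread_lock);

unsigned int cpumask_local_spread(unsigned int i, int node)
{
	unsigned long flags;
	unsigned int cpu;

	spin_lock_irqsave(&spread_lock, flags);

	/* Reset the scratch arrays for this call. */
	memset(spread_node_dist, 0, sizeof(spread_node_dist));
	memset(spread_node_used, 0, sizeof(spread_node_used));

	/*
	 * The NUMA-distance based selection from the patch would run here,
	 * using the static arrays instead of on-stack ones; the
	 * __cpumask_local_spread() fallback from the patch stands in for
	 * the real selection so this sketch stays self-contained.
	 */
	cpu = __cpumask_local_spread(i, node);

	spin_unlock_irqrestore(&spread_lock, flags);

	return cpu;
}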

Thanks,
Shaokun

> function.
> 
> 
> .
> 
