Message-ID: <3e2e760d-e4b9-8bd0-a279-b23bd7841ae7@intel.com>
Date:   Wed, 4 Nov 2020 08:10:35 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     Shaokun Zhang <zhangshaokun@...ilicon.com>,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Cc:     Yuqi Jin <jinyuqi@...wei.com>,
        Rusty Russell <rusty@...tcorp.com.au>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Juergen Gross <jgross@...e.com>,
        Paul Burton <paul.burton@...s.com>,
        Michal Hocko <mhocko@...e.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Anshuman Khandual <anshuman.khandual@....com>
Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

On 11/3/20 5:39 AM, Shaokun Zhang wrote:
> Currently, Intel DDIO affects only local sockets, so its performance
> improvement is due to the relative difference in performance between the
> local socket I/O and remote socket I/O. To ensure that Intel DDIO’s
> benefits are available to applications where they are most useful, the
> irq can be pinned to particular sockets using Intel DDIO.
> This arrangement is called socket affinity. So this patch can help
> Intel DDIO work. The same I/O stash function for most processors

A great changelog would probably include a bit of context about DDIO.
Even being from Intel, I'd heard of this, but I didn't immediately know
what the acronym stood for.

The thing that matters here is that DDIO allows devices to use processor
caches instead of having them always do uncached accesses to main
memory.  That's a pretty important detail left out of the changelog.

> On the Huawei Kunpeng 920 server, there are 4 NUMA nodes (0 - 3) in the
> 2-cpu system (0 - 1). The topology of this server is as follows:

This is with a feature enabled that Intel calls sub-NUMA clustering
(SNC), right?  Explaining *that* feature would also be great context for
why this gets triggered on your system and not normally on others, and
why nobody noticed this until now.

> The IRQs 369-392 will be bound from NUMA node0 to NUMA node3 with this
> patch. Before the patch:
> 
> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
> 0
> Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
> 1
> ...
> Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
> 22
> Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
> 23
> After the patch:

I _think_ what you are trying to convey here is that IRQs 369 and 370
are from a device plugged into one socket, but they are bound to CPUs 0
and 1, which are in the other socket.  Once device traffic leaves the
socket, it can no longer use DDIO and performance suffers.

The same situation is true for IRQs 391/392 and CPUs 22/23.

You don't come out and say it, but I assume that the root of this issue
is that once we fill up a NUMA node's worth of CPUs with an affinitized
IRQ per CPU, we go looking for CPUs in other NUMA nodes.  In this case,
we have the processor in this weird mode that chops sockets into two
NUMA nodes, which makes the device's NUMA node fill up faster.

The current behavior just "wraps around" to find a new node.  But this
wrap-around behavior is nasty in this case because it might cross a socket.
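
For reference, the pre-patch fallback (paraphrasing the lines this hunk
removes, quoted further down) boils down to:

	/* NUMA first. */
	for_each_cpu_and(cpu, cpumask_of_node(node), mask)
		if (i-- == 0)
			return cpu;

	/* Then any other online CPU, in plain numeric order. */
	for_each_cpu(cpu, mask) {
		if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
			continue;
		if (i-- == 0)
			return cpu;
	}

so "wraps around" here just means taking whatever CPU number comes next,
with no notion of distance or sockets.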

> +static void calc_node_distance(int *node_dist, int node)
> +{
> +	int i;
> +
> +	for (i = 0; i < nr_node_ids; i++)
> +		node_dist[i] = node_distance(node, i);
> +}

This appears to be the only place node_dist[] is written.  That means it
always contains a one-dimensional slice of the two-dimensional data
represented by node_distance().

Why is a copy of this data needed?
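
Purely as an illustration (an untested sketch, not something in the
patch): the find_nearest_node() helper quoted just below could ask
node_distance() directly each time, which would make the copy, the
static node_dist[] array, and part of the locking unnecessary:

	static int find_nearest_node(int node, const bool *used)
	{
		int i, node_id = NUMA_NO_NODE, min_dist = INT_MAX;

		/* Pick the nearest node that has not been used yet */
		for (i = 0; i < nr_node_ids; i++) {
			if (used[i])
				continue;
			if (node_distance(node, i) < min_dist) {
				min_dist = node_distance(node, i);
				node_id = i;
			}
		}

		return node_id;
	}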

> +static int find_nearest_node(int *node_dist, bool *used)
> +{
> +	int i, min_dist = node_dist[0], node_id = -1;
> +
> +	/* Choose the first unused node to compare */
> +	for (i = 0; i < nr_node_ids; i++) {
> +		if (used[i] == 0) {
> +			min_dist = node_dist[i];
> +			node_id = i;
> +			break;
> +		}
> +	}
> +
> +	/* Compare and return the nearest node */
> +	for (i = 0; i < nr_node_ids; i++) {
> +		if (node_dist[i] < min_dist && used[i] == 0) {
> +			min_dist = node_dist[i];
> +			node_id = i;
> +		}
> +	}
> +
> +	return node_id;
> +}
> +
>  /**
>   * cpumask_local_spread - select the i'th cpu with local numa cpu's first
>   * @i: index number
> @@ -206,7 +238,11 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>   */

The diff missed some important context:

>  * This function selects an online CPU according to a numa aware policy;
>  * local cpus are returned first, followed by non-local ones, then it
>  * wraps around.

This patch changes that behavior but doesn't update the comment.
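
Something along these lines (just a suggestion) would at least match the
new behavior:

 * This function selects an online CPU according to a numa aware policy;
 * local cpus are returned first, then cpus on increasingly distant
 * nodes, and finally it wraps around to any remaining online cpu.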


>  unsigned int cpumask_local_spread(unsigned int i, int node)
>  {
> -	int cpu, hk_flags;
> +	static DEFINE_SPINLOCK(spread_lock);
> +	static int node_dist[MAX_NUMNODES];
> +	static bool used[MAX_NUMNODES];

Not to be *too* picky, but there is a reason we declare nodemask_t as a
bitmap and not an array of bools.  Isn't this just wasteful?
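
For illustration (untested), the same bookkeeping with a nodemask_t and
the helpers from <linux/nodemask.h>:

	static nodemask_t used_nodes;

	/* reset before each search */
	nodes_clear(used_nodes);

	/* mark a node as consumed */
	node_set(id, used_nodes);

	/* and the "is it used?" check becomes */
	if (node_isset(i, used_nodes))
		continue;

That's MAX_NUMNODES bits instead of MAX_NUMNODES bools.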

> +	unsigned long flags;
> +	int cpu, hk_flags, j, id;
>  	const struct cpumask *mask;
>  
>  	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
> @@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
>  				return cpu;
>  		}
>  	} else {
> -		/* NUMA first. */
> -		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
> -			if (i-- == 0)
> -				return cpu;
> -		}
> +		spin_lock_irqsave(&spread_lock, flags);
> +		memset(used, 0, nr_node_ids * sizeof(bool));
> +		calc_node_distance(node_dist, node);
> +		/* Local node first then the nearest node is used */

Is this comment really correct?  This makes it sound like there is only
a fallback to a single node.  Doesn't the _code_ fall back basically
without limit?
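
If the intent is "nearest remaining node first, repeatedly, until we run
out of nodes", maybe just say that, e.g. (suggestion only):

		/* Walk nodes from nearest to farthest until the i'th cpu is found */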

> +		for (j = 0; j < nr_node_ids; j++) {
> +			id = find_nearest_node(node_dist, used);
> +			if (id < 0)
> +				break;
>  
> -		for_each_cpu(cpu, mask) {
> -			/* Skip NUMA nodes, done above. */
> -			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
> -				continue;
> +			for_each_cpu_and(cpu, cpumask_of_node(id), mask)
> +				if (i-- == 0) {
> +					spin_unlock_irqrestore(&spread_lock,
> +							       flags);
> +					return cpu;
> +				}
> +			used[id] = 1;
> +		}
> +		spin_unlock_irqrestore(&spread_lock, flags);

The existing code was pretty sparsely commented.  This looks to me like
it makes things more complicated and *less* commented.  Not the best combo.

> +		for_each_cpu(cpu, mask)
>  			if (i-- == 0)
>  				return cpu;
> -		}
>  	}
>  	BUG();
>  }
> 
