Date:   Tue, 21 Feb 2023 10:23:19 +0100
From:   Paul Menzel <pmenzel@...gen.mpg.de>
To:     Pawel Chmielewski <pawel.chmielewski@...el.com>
Cc:     netdev@...r.kernel.org, intel-wired-lan@...osl.org
Subject: Re: [Intel-wired-lan] [PATCH net-next v3 1/1] ice: Change assigning
 method of the CPU affinity masks

Dear Pawel,


Thank you for your patch.

On 17.02.23 at 23:03, Pawel Chmielewski wrote:
> With the introduction of sched_numa_hop_mask() and for_each_numa_hop_mask(),
> the affinity masks for queue vectors can be conveniently set by preferring the
> CPUs that are closest to the NUMA node of the parent PCI device.

Please reflow the commit message to 75 characters per line.

Additionally, you could be more specific in the commit message summary:

ice: Prefer CPUs closest to NUMA node of parent PCI

In the commit message, please elaborate on how you tested and
benchmarked your change.
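
As far as I understand it, the new loops spread the queue vectors over
the CPUs in order of increasing NUMA distance from the device's node,
roughly like below (a simplified sketch, not the exact driver code;
assign_next_vector_to() is just a placeholder):

    /* Walk cpumasks of increasing NUMA distance from `node`.
     * for_each_cpu_andnot() skips the CPUs already covered by the
     * previous (closer) hop mask.
     */
    for_each_numa_hop_mask(mask, node) {
        for_each_cpu_andnot(cpu, mask, prev_mask)
            assign_next_vector_to(cpu);  /* placeholder helper */
        prev_mask = mask;
    }

Spelling that out in the commit message would help reviewers.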

> Signed-off-by: Pawel Chmielewski <pawel.chmielewski@...el.com>
> ---
> Changes since v2:
>   * Pointers for cpumasks point to const struct cpumask
>   * Removed unnecessary label
>   * Removed redundant blank lines
> 
> Changes since v1:
>   * Removed obsolete comment
>   * Inverted condition for loop escape
>   * Incrementing v_idx only in case of available cpu
> ---
>   drivers/net/ethernet/intel/ice/ice_base.c | 21 ++++++++++++++++-----
>   1 file changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> index 1911d644dfa8..30dc1c3c290f 100644
> --- a/drivers/net/ethernet/intel/ice/ice_base.c
> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> @@ -121,9 +121,6 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
>   
>   	if (vsi->type == ICE_VSI_VF)
>   		goto out;
> -	/* only set affinity_mask if the CPU is online */
> -	if (cpu_online(v_idx))
> -		cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
>   
>   	/* This will not be called in the driver load path because the netdev
>   	 * will not be created yet. All other cases with register the NAPI
> @@ -662,8 +659,10 @@ int ice_vsi_wait_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)
>    */
>   int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
>   {
> +	const struct cpumask *aff_mask, *last_aff_mask = cpu_none_mask;
>   	struct device *dev = ice_pf_to_dev(vsi->back);
> -	u16 v_idx;
> +	int numa_node = dev->numa_node;
> +	u16 v_idx, cpu = 0;

Could you use `unsigned int` for `cpu`?

     include/linux/cpumask.h:static inline bool cpu_online(unsigned int cpu)
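
That is, something like the following (untested, just to illustrate):

    unsigned int cpu = 0;
    u16 v_idx;

cpumask_set_cpu() takes an unsigned int as well.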

>   	int err;
>   
>   	if (vsi->q_vectors[0]) {
> @@ -677,7 +676,19 @@ int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
>   			goto err_out;
>   	}
>   
> -	return 0;
> +	v_idx = 0;
> +	for_each_numa_hop_mask(aff_mask, numa_node) {
> +		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask) {
> +			if (v_idx >= vsi->num_q_vectors)
> +				return 0;
> +
> +			if (cpu_online(cpu)) {
> +				cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
> +				v_idx++;
> +			}
> +		}
> +		last_aff_mask = aff_mask;
> +	}
>   
>   err_out:
>   	while (v_idx--)


Kind regards,

Paul
