Message-ID: <20220926114448.10434fba@kernel.org>
Date:   Mon, 26 Sep 2022 11:44:48 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Nick Child <nnac123@...ux.ibm.com>
Cc:     netdev@...r.kernel.org, bjking1@...ux.ibm.com, haren@...ux.ibm.com,
        ricklind@...ibm.com, mmc@...ux.ibm.com
Subject: Re: [PATCH net-next 3/3] ibmveth: Ethtool set queue support

On Wed, 21 Sep 2022 16:50:56 -0500 Nick Child wrote:
> diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
> index 7abd67c2336e..2c5ded4f3b67 100644
> --- a/drivers/net/ethernet/ibm/ibmveth.c
> +++ b/drivers/net/ethernet/ibm/ibmveth.c
> @@ -141,6 +141,13 @@ static inline int ibmveth_rxq_csum_good(struct ibmveth_adapter *adapter)
>  	return ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_CSUM_GOOD;
>  }
>  
> +static unsigned int ibmveth_real_max_tx_queues(void)
> +{
> +	unsigned int n_cpu =  num_online_cpus();

nit: double space after =

> +	return n_cpu > IBMVETH_MAX_QUEUES ? IBMVETH_MAX_QUEUES : n_cpu;

This is open-coded min(); use min() from minmax.h.

> +static void ibmveth_get_channels(struct net_device *netdev,
> +				 struct ethtool_channels *channels)
> +{
> +	channels->max_tx = ibmveth_real_max_tx_queues();
> +	channels->tx_count = netdev->real_num_tx_queues;
> +
> +	channels->max_rx = netdev->real_num_rx_queues;
> +	channels->rx_count = netdev->real_num_rx_queues;

Which is 1, right?

> +	channels->max_other = 0;
> +	channels->other_count = 0;
> +	channels->max_combined = 0;
> +	channels->combined_count = 0;

AFAIR the core zeros the input, no need to explicitly set to 0 here.

> +static int ibmveth_set_channels(struct net_device *netdev,
> +				struct ethtool_channels *channels)
> +{
> +	struct ibmveth_adapter *adapter = netdev_priv(netdev);
> +	int rc, rc2, i;
> +	unsigned int fallback_num, goal;
> +
> +	/* Higher levels will catch basic input errors */
> +	if (channels->tx_count > ibmveth_real_max_tx_queues())
> +		return -EINVAL;

AFAIR core validates this.

> +	if (channels->tx_count == netdev->real_num_tx_queues)
> +		return 0;

And this.

> +	/* We have IBMVETH_MAX_QUEUES netdev_queue's allocated
> +	 * but we may need to alloc/free the ltb's.
> +	 */
> +	netif_tx_stop_all_queues(netdev);

What if the device is not UP?

> +	fallback_num = netdev->real_num_tx_queues;

fallback is not great naming. Why not old or prev?

> +	goal = channels->tx_count;
> +
> +setup_tx_queues:
> +	/* Allocate any queue that we need */
> +	for (i = 0; i < goal; i++) {
> +		if (adapter->tx_ltb_ptr[i])
> +			continue;
> +
> +		rc = ibmveth_allocate_tx_ltb(adapter, i);
> +		if (!rc)
> +			continue;

I don't understand why you start from zero here. The first real_num_tx
queues should already be allocated. If you assume that, it removes the
need for the error handling below: either the number of queues is
increased, and on failure we go back to the old one, or it's decreased,
which can't fail.

> +		if (goal == fallback_num)
> +			goto full_restart;
> +
> +		netdev_err(netdev, "Failed to allocate more tx queues, returning to %d queues\n",
> +			   fallback_num);
> +		goal = fallback_num;
> +		goto setup_tx_queues;
> +	}
> +	/* Free any that are no longer needed */
> +	for (; i < fallback_num; i++) {
> +		if (adapter->tx_ltb_ptr[i])
> +			ibmveth_free_tx_ltb(adapter, i);
> +	}
> +
> +	rc = netif_set_real_num_tx_queues(netdev, goal);

You can assume this doesn't fail on decrease.

> +	if (rc) {
> +		if (goal == fallback_num)
> +			goto full_restart;
> +		netdev_err(netdev, "Failed to set real tx queues, returning to %d queues\n",
> +			   fallback_num);
> +		goal = fallback_num;
> +		goto setup_tx_queues;
> +	}

>  #define IBMVETH_MAX_BUF_SIZE (1024 * 128)
>  #define IBMVETH_MAX_TX_BUF_SIZE (1024 * 64)
> -#define IBMVETH_MAX_QUEUES 8
> +#define IBMVETH_MAX_QUEUES 16

Why does the same series introduce the max as 8 and then bump to 16?
Why can't patch 1 just use 16 from the start?
