Date:   Tue, 27 Sep 2022 14:42:39 -0500
From:   Nick Child <nnac123@...ux.ibm.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     netdev@...r.kernel.org, bjking1@...ux.ibm.com, haren@...ux.ibm.com,
        ricklind@...ibm.com, mmc@...ux.ibm.com
Subject: Re: [PATCH net-next 3/3] ibmveth: Ethtool set queue support


On 9/26/22 13:44, Jakub Kicinski wrote:
> On Wed, 21 Sep 2022 16:50:56 -0500 Nick Child wrote:
>> diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
>> index 7abd67c2336e..2c5ded4f3b67 100644
>> --- a/drivers/net/ethernet/ibm/ibmveth.c
>> +++ b/drivers/net/ethernet/ibm/ibmveth.c

>> +static void ibmveth_get_channels(struct net_device *netdev,
>> +				 struct ethtool_channels *channels)
>> +{
>> +	channels->max_tx = ibmveth_real_max_tx_queues();
>> +	channels->tx_count = netdev->real_num_tx_queues;
>> +
>> +	channels->max_rx = netdev->real_num_rx_queues;
>> +	channels->rx_count = netdev->real_num_rx_queues;
> 
> Which is 1, right?

Correct, there will only be one rx queue. I chose to read these values
from the netdev rather than use hard-coded constants, to ensure that
the values presented to the user are the ones actually in effect in
the networking layer.
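
For what it's worth, here is a minimal sketch (hypothetical function
name, not the patch itself) of why reporting real_num_tx_queues stays
honest, assuming the set path uses the standard
netif_set_real_num_tx_queues() helper: a later get_channels read then
reflects exactly what the core accepted:

	static int example_set_channels(struct net_device *netdev,
					struct ethtool_channels *channels)
	{
		/* If this fails, real_num_tx_queues is unchanged and
		 * get_channels keeps reporting the old, true value.
		 */
		return netif_set_real_num_tx_queues(netdev,
						    channels->tx_count);
	}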

>> +static int ibmveth_set_channels(struct net_device *netdev,
>> +				struct ethtool_channels *channels)

>> +	/* We have IBMVETH_MAX_QUEUES netdev_queue's allocated
>> +	 * but we may need to alloc/free the ltb's.
>> +	 */
>> +	netif_tx_stop_all_queues(netdev);
> 
> What if the device is not UP?

From my understanding, this will just set the __QUEUE_STATE_DRV_XOFF
bit on all of the queues. I don't think this will cause any issues if
the device is DOWN. Please let me know if there is anything in
particular you are worried about here.
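
For reference, netif_tx_stop_all_queues() in include/linux/netdevice.h
is (roughly, in recent kernels) just a loop over the per-queue stop
helper, which only sets that state bit:

	static inline void netif_tx_stop_all_queues(struct net_device *dev)
	{
		unsigned int i;

		for (i = 0; i < dev->num_tx_queues; i++) {
			struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

			netif_tx_stop_queue(txq);
		}
	}

	static inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
	{
		/* Only marks the queue stopped for the stack;
		 * no device I/O happens here.
		 */
		set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
	}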

Just as a side note: ibmveth never sets the carrier state to UP, since
it cannot determine the link status of the physical device being
virtualized. The state is instead left as UNKNOWN whenever the device
is not DOWN.
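
To illustrate (a hypothetical example, not ibmveth code): a typical
driver would assert the link in its open path, which ibmveth
deliberately never does:

	static int example_open(struct net_device *netdev)
	{
		/* Most drivers assert the link here once the device is
		 * ready; skipping this call leaves operstate UNKNOWN.
		 */
		netif_carrier_on(netdev);
		netif_tx_start_all_queues(netdev);
		return 0;
	}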


The other comments from Jakub's review are left out because I fully
agree with them and will apply them in a v2 soon (along with anything
else that comes up before then).

Thank you very much for the review Jakub :)

--Nick Child
