Message-ID: <4CAC8D11.2060604@intel.com>
Date:	Wed, 06 Oct 2010 07:52:01 -0700
From:	John Fastabend <john.r.fastabend@...el.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
CC:	Eric Dumazet <eric.dumazet@...il.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"therbert@...gle.com" <therbert@...gle.com>
Subject: Re: [net-next-2.6 PATCH] net: netif_set_real_num_rx_queues may cap
 num_rx_queues at init time

On 10/5/2010 10:45 AM, John Fastabend wrote:
> On 10/5/2010 9:34 AM, Ben Hutchings wrote:
>> On Tue, 2010-10-05 at 09:08 -0700, John Fastabend wrote:
>>> On 10/4/2010 10:35 PM, Eric Dumazet wrote:
>>>> Le lundi 04 octobre 2010 à 15:00 -0700, John Fastabend a écrit :
>>>>> The logic for netif_set_real_num_rx_queues is the following,
>>>>>
>>>>> netif_set_real_num_rx_queues(dev, rxq)
>>>>> {
>>>>> 	...
>>>>> 	if (dev->reg_state == NETREG_REGISTERED) {
>>>>> 		...
>>>>> 	} else {
>>>>> 		dev->num_rx_queues = rxq;
>>>>> 	}
>>>>>
>>>>> 	dev->real_num_rx_queues = rxq;
>>>>> 	return 0;
>>>>> }
>>>>>
>>>>> Some drivers' init paths look like the following,
>>>>>
>>>>> alloc_etherdev_mq(priv_sz, max_num_queues_ever);
>>>>> ...
>>>>> netif_set_real_num_rx_queues(dev, queues_to_use_now);
>>>>> ...
>>>>> register_netdev(dev);
>>>>> ...
>>>>>
>>>>> Because netif_set_real_num_rx_queues sets num_rx_queues when the
>>>>> reg state is not NETREG_REGISTERED, we end up with an incorrect
>>>>> maximum number of rx queues. This patch proposes removing the else
>>>>> clause above so this does not occur. Also, just reading the name
>>>>> netif_set_real_num_rx_queues, it is unexpected that num_rx_queues
>>>>> gets set as a side effect.
>>>>>
>>>>
>>>> You don't tell why it's "incorrect".
>>>>
>>>
>>> OK that is a poor description.
>>>
>>>> Why should we keep num_rx_queues > real_num_rx_queues ?
>>>>
>>>
>>> If we do not ever need them then we should not keep them, I agree.
>>> But having netif_set_real_num_rx_queues set something other than
>>> 'real_num_rx_queues' does not seem right to me, at least. Also,
>>> netif_set_real_num_tx_queues and netif_set_real_num_rx_queues have
>>> different behavior. It would be nice if this weren't the case, but
>>> they allocate queues in two places.
>> [...]
>>
>> I only did this to satisfy Eric's desire to reduce memory usage.
>> However, I believe that there are currently no drivers that dynamically
>> increase numbers of RX or TX queues.  Until there are, there is not much
>> point in removing this assignment to num_rx_queues.
>>
>> Ben.
>>
> 
> ixgbe increases real_num_[rx|tx]_queues when FCoE or DCB is enabled.
> Also, many drivers could increase their number of queues if they were
> given more interrupt vectors at some point.


If I update the handful of drivers that call netif_set_real_num_rx_queues()
before the netdevice is registered to explicitly set num_rx_queues, this
would address Eric's concerns and fix the drivers that really only want to
set real_num_rx_queues.

Any thoughts?

-- John