Date:	Mon, 25 Jun 2012 14:03:18 -0700
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
CC:	eilong@...adcom.com, Yuval Mintz <yuvalmin@...adcom.com>,
	davem@...emloft.net, netdev@...r.kernel.org,
	Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
	John Fastabend <john.r.fastabend@...el.com>
Subject: Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default
 RSS queues

On 06/25/2012 11:44 AM, Ben Hutchings wrote:
> On Mon, 2012-06-25 at 11:38 -0700, Alexander Duyck wrote:
>> On 06/25/2012 10:53 AM, Eilon Greenstein wrote:
>>> On Mon, 2012-06-25 at 08:44 -0700, Alexander Duyck wrote:
>>>> This doesn't limit queues, only interrupt vectors.  As I told you before,
>>>> you should look at the ixgbe_set_rss_queues function if you actually
>>>> want to limit the number of RSS queues.
>>> How about this:
>>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> index af1a531..23a8609 100644
>>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> @@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
>>>  	bool ret = false;
>>>  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
>>>  
>>> +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
>>> +
>>>  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
>>>  		f->mask = 0xF;
>>>  		adapter->num_rx_queues = f->indices;
>>> @@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
>>>  	bool ret = false;
>>>  	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
>>>  
>>> -	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
>>> +	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
>>> +				f_fdir->indices);
>>> +
>>>  	f_fdir->mask = 0;
>>>  
>>>  	/*
>>> @@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
>>>  	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
>>>  		return false;
>>>  
>>> -	f->indices = min_t(int, num_online_cpus(), f->indices);
>>> -
>>> +	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
>>>  	adapter->num_rx_queues = 1;
>>>  	adapter->num_tx_queues = 1;
>>>
>> This makes much more sense, but it still needs a few minor changes.  The
>> first change I would recommend is not to alter ixgbe_set_fdir_queues,
>> since that controls flow director queues, not RSS queues.  The second is
>> to update ixgbe_set_dcb_queues, since that does RSS per DCB traffic
>> class and will need the same limit applied.
> There is a difference between the stated aim (reducing the memory
> allocated for RX buffers) and this implementation (limiting the number
> of RSS queues only).  If we agree on that aim, then we should be
> limiting the total number of RX queues.
>
> Ben
The problem is that certain features require a certain number of Tx/Rx
queues.  In addition, certain features scale with the number of queues
differently than RSS does.

For example, if we enable DCB we require at least one queue per traffic
class.  In the case of ATR we should have one queue per CPU, since the
Tx queue selection is based on the number of CPUs.  If we don't have
that 1:1 mapping we should disable ATR, since the feature will not
function correctly in that case.
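
For illustration only (not part of the posted patch), a minimal sketch
of that guard inside ixgbe_set_fdir_queues() might look like the
following; IXGBE_FLAG_FDIR_HASH_CAPABLE is assumed here to be the flag
that gates ATR:

	/*
	 * Sketch: ATR steers Tx traffic by CPU, so it wants one queue per
	 * online CPU.  If the RSS cap leaves fewer queues than CPUs, the
	 * 1:1 mapping is gone; clear the ATR flag and return false so the
	 * caller falls back to a plain RSS configuration.
	 */
	if (f_fdir->indices < num_online_cpus()) {
		adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
		return false;
	}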

Thanks,

Alex
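
As a rough illustration of the ixgbe_set_dcb_queues change suggested
earlier in the thread (hypothetical: the surrounding structure and the
use of netdev_get_num_tc() are assumptions, not the actual function
body), the same cap could bound the RSS queue count used per DCB
traffic class:

	/*
	 * Sketch: limit the per-traffic-class RSS queue count by the same
	 * default, then size the total queue counts from that.
	 */
	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
	int tcs = netdev_get_num_tc(adapter->netdev);
	int rss_i = min_t(int, f->indices, netif_get_num_default_rss_queues());

	adapter->num_rx_queues = rss_i * tcs;
	adapter->num_tx_queues = rss_i * tcs;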

