Date:	Mon, 24 Feb 2014 11:49:32 -0800
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	Prarit Bhargava <prarit@...hat.com>
CC:	netdev@...r.kernel.org, Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
	Jesse Brandeburg <jesse.brandeburg@...el.com>,
	Bruce Allan <bruce.w.allan@...el.com>,
	Carolyn Wyborny <carolyn.wyborny@...el.com>,
	Don Skidmore <donald.c.skidmore@...el.com>,
	Greg Rose <gregory.v.rose@...el.com>,
	John Ronciak <john.ronciak@...el.com>,
	Mitch Williams <mitch.a.williams@...el.com>,
	"David S. Miller" <davem@...emloft.net>, nhorman@...hat.com,
	agospoda@...hat.com, e1000-devel@...ts.sourceforge.net
Subject: Re: [PATCH 1/2] ixgbe, make interrupt allocations NUMA aware

On 02/24/2014 11:39 AM, Prarit Bhargava wrote:
> 
> 
> On 02/24/2014 02:26 PM, Alexander Duyck wrote:
>> On 02/24/2014 10:51 AM, Prarit Bhargava wrote:
>>> The ixgbe driver creates one queue per cpu on the system in order to
>>> spread work out across all cpus rather than restricting work to a
>>> single cpu.  This model, while efficient, does not take into account
>>> the NUMA configuration of the system.
>>>
>>> This patch introduces ixgbe_num_cpus() which returns
>>> the number of online cpus if the adapter's PCI device has no NUMA
>>> restrictions, and the number of cpus in the node if the PCI device is
>>> allocated to a specific node.
>>>
>>> Signed-off-by: Prarit Bhargava <prarit@...hat.com>
>>> Cc: Jeff Kirsher <jeffrey.t.kirsher@...el.com>
>>> Cc: Jesse Brandeburg <jesse.brandeburg@...el.com>
>>> Cc: Bruce Allan <bruce.w.allan@...el.com>
>>> Cc: Carolyn Wyborny <carolyn.wyborny@...el.com>
>>> Cc: Don Skidmore <donald.c.skidmore@...el.com>
>>> Cc: Greg Rose <gregory.v.rose@...el.com>
>>> Cc: Alex Duyck <alexander.h.duyck@...el.com>
>>> Cc: John Ronciak <john.ronciak@...el.com>
>>> Cc: Mitch Williams <mitch.a.williams@...el.com>
>>> Cc: "David S. Miller" <davem@...emloft.net>
>>> Cc: nhorman@...hat.com
>>> Cc: agospoda@...hat.com
>>> Cc: e1000-devel@...ts.sourceforge.net
>>> ---
>>>  drivers/net/ethernet/intel/ixgbe/ixgbe.h       |    2 ++
>>>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c   |   28 +++++++++++++++++++++---
>>>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c  |    6 ++---
>>>  drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |    5 +++--
>>>  4 files changed, 33 insertions(+), 8 deletions(-)
>>>
>>
>> [...]
>>
>>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>>> index 18076c4..b68a6e9 100644
>>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>>> @@ -4953,13 +4953,13 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
>>>  	hw->subsystem_device_id = pdev->subsystem_device;
>>>  
>>>  	/* Set common capability flags and settings */
>>> -	rss = min_t(int, IXGBE_MAX_RSS_INDICES, num_online_cpus());
>>> +	rss = min_t(int, IXGBE_MAX_RSS_INDICES, ixgbe_num_cpus(adapter));
>>>  	adapter->ring_feature[RING_F_RSS].limit = rss;
>>>  	adapter->flags2 |= IXGBE_FLAG2_RSC_CAPABLE;
>>>  	adapter->flags2 |= IXGBE_FLAG2_RSC_ENABLED;
>>>  	adapter->max_q_vectors = MAX_Q_VECTORS_82599;
>>>  	adapter->atr_sample_rate = 20;
>>> -	fdir = min_t(int, IXGBE_MAX_FDIR_INDICES, num_online_cpus());
>>> +	fdir = min_t(int, IXGBE_MAX_FDIR_INDICES, ixgbe_num_cpus(adapter));
>>>  	adapter->ring_feature[RING_F_FDIR].limit = fdir;
>>>  	adapter->fdir_pballoc = IXGBE_FDIR_PBALLOC_64K;
>>>  #ifdef CONFIG_IXGBE_DCA
>>
>> This is the one bit I object to in this patch.  The flow director queue
>> count should be equal to the number of online CPUs, or at least as close
>> to it as the hardware can get.  Otherwise ATR is completely useless.
> 
> I'm reading up on ATR now and I see your point completely.  I will remove this
> chunk in V2.  OOC, however, what about my concern with ATR & the location of the
> PCI device (on a different root bridge)?  Isn't that a concern with ATR or am I
> missing something with the overall scheme of ATR?
> 
> P.
> 

The advantage of ATR is that it knows where the application requesting
the packet data resides.  Applications on remote nodes still need
access to the device, and the only means of getting to it is through
memory.  If the root complex is on one node and the memory/CPU is on
another, it is still cheaper to have the device push the descriptor and
packet to the remote memory/CPU than to have the CPU fetch it from the
local node's memory and then copy it into the application's memory.

RSS, which is the fallback when we don't have ATR, isn't application
aware, so in the case of RSS we probably want to just process all of
the requests locally and hope for the best, since we don't know what
node the data will eventually end up on.

Thanks,

Alex

