Date:	Fri, 09 Oct 2009 16:48:56 -0700
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	Chris Friesen <cfriesen@...tel.com>
CC:	e1000-list <e1000-devel@...ts.sourceforge.net>,
	Linux Network Development list <netdev@...r.kernel.org>,
	"Allan, Bruce W" <bruce.w.allan@...el.com>,
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"Ronciak, John" <john.ronciak@...el.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"gospo@...hat.com" <gospo@...hat.com>
Subject: Re: [E1000-devel] behaviour question for igb on nehalem box

Alexander Duyck wrote:
> Chris Friesen wrote:
>> On 10/09/2009 02:22 PM, Brandeburg, Jesse wrote:
>>> On Fri, 9 Oct 2009, Chris Friesen wrote:
>>>> I've got some general questions around the expected behaviour of the
>>>> 82576 igb net device.  (On a dual quad-core Nehalem box, if it matters.)
>>> the hardware you have only supports 8 
>>> queues (rx and tx) and the driver is configured to only set up 4 max.
>> The datasheet for the 82576 says 16 tx queues and 16 rx queues.  Is that
>> a typo or do we have the economy version?
> 
> Actually the limitation is due to the fact that there are only 10 
> interrupts available.  On kernels that support TX multi-queue the number 
> of queues would be 4 TX and 4 RX, which would consume 8 interrupts, 
> leaving one for the link status change and one unused.
> 
> However on the kernel you are using I don't believe multi-queue NAPI is 
> enabled so you shouldn't have multiple RX queues either.  On a 2.6.18 
> kernel you should have only 1 RX and 1 TX queue unless you are using the 
> driver provided on e1000.sourceforge.net which uses fake netdevs to 
> support multi-queue NAPI.  I believe this may be a bug that was 
> introduced when SR-IOV support was back-ported from the 2.6.30 kernel.

Actually, after looking closer at the Red Hat source, it looks like they 
have done the fake netdev workaround in their own code, so I guess the 
igb driver in the RHEL kernel does support multiple RX queues.
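For clarity, the vector budget I described works out like this (a back-of-the-envelope sketch in Python; the counts are the ones quoted in this thread, not read from the driver source):

```python
# Sketch of the MSI-X vector arithmetic for the 82576 as discussed above.
# These numbers come from the thread, not from the igb driver itself.
TOTAL_VECTORS = 10   # interrupts available in this configuration
TX_QUEUES = 4        # driver maximum on multi-queue TX kernels
RX_QUEUES = 4

queue_vectors = TX_QUEUES + RX_QUEUES        # one vector per queue
link_vector = 1                              # link status change interrupt
unused = TOTAL_VECTORS - queue_vectors - link_vector

print(queue_vectors, link_vector, unused)    # 8 1 1
```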

>>>> My second question is around how the rx queues are mapped to interrupts.
>>>>  According to /proc/interrupts there appears to be a 1:1 mapping between
>>>> queues and interrupts.  However, I've set up at test with a given amount
>>>> of traffic coming in to the device (from 4 different IP addresses and 4
>>>> ports).  Under this scenario, "ethtool -S" shows the number of packets
>>>> increasing for only rx queue 0, but I see the interrupt count going up
>>>> for two interrupts.
>>> one transmit interrupt and one receive interrupt?
>> No, two rx interrupts.  (Can't remember if the tx interrupt was going up
>> as well or no...was only looking at rx.)
> 
> This may be due to the bug I mentioned above.  Multiple RX queues 
> shouldn't be present on the 2.6.18 kernel as I do not believe 
> multi-queue NAPI has been back-ported and it could have negative effects.

The odds of any two flows overlapping when you are only using 4 flows 
are pretty high, especially if the addresses/ports are close in range.  
You typically need something on the order of 16 flows over a wide range 
of port numbers to get a good distribution.
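The overlap odds above are just the birthday problem. Treating the RSS hash as assigning each flow to a queue uniformly at random (an idealization; the hardware actually uses a Toeplitz hash over the address/port tuple), a quick sketch:

```python
# P(at least two flows land on the same RX queue), assuming the hash
# spreads flows uniformly at random across the queues (idealized model,
# not the actual Toeplitz hash used by the 82576).
from math import prod

def collision_probability(flows: int, queues: int) -> float:
    """Chance that at least two of `flows` flows share one of `queues` queues."""
    p_all_distinct = prod((queues - i) / queues for i in range(flows))
    return 1.0 - p_all_distinct

# With only 4 flows over 4 queues a collision is very likely:
print(collision_probability(4, 4))    # 0.90625
```

With 16 flows some queues necessarily share flows, but the load spreads far more evenly, which is why more flows over a wider port range gives a better-looking distribution.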

Thanks,

Alex



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
