Message-ID: <4ACF8466.5030309@nortel.com>
Date:	Fri, 09 Oct 2009 12:43:50 -0600
From:	"Chris Friesen" <cfriesen@...tel.com>
To:	e1000-list <e1000-devel@...ts.sourceforge.net>,
	Linux Network Development list <netdev@...r.kernel.org>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"Allan, Bruce W" <bruce.w.allan@...el.com>,
	peter.p.waskiewicz.jr@...el.com,
	"Ronciak, John" <john.ronciak@...el.com>
Subject: behaviour question for igb on nehalem box


Hi all,

I've got some general questions about the expected behaviour of the
82576 igb net device.  (On a dual quad-core Nehalem box, if it matters.)

As a caveat, the box is running CentOS 5.3 with its 2.6.18 kernel.
It's using the 1.3.16-k2 igb driver though, which looks to be the one
from mainline Linux.
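
(For reference, the driver and version actually bound to the NIC can be
confirmed with something like the following; "eth0" is just a placeholder
for the interface name.)

  # Driver name/version bound to the interface
  ethtool -i eth0

  # Version of the igb module on disk
  modinfo igb | grep -i version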

The igb driver is being loaded with no parameters specified.  At driver
init time, it's selecting 1 tx queue and 4 rx queues per device.
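
(A rough way to see how many queues and MSI-X vectors the driver actually
set up, again assuming the interface is eth0, is something like:)

  # igb exports one rx_queue_N / tx_queue_N stats group per queue
  ethtool -S eth0 | grep -E 'rx_queue|tx_queue'

  # MSI-X vectors the driver registered for this interface
  grep eth0 /proc/interrupts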

My first question is whether the number of queues makes sense.  I
couldn't see why the tx and rx counts would differ, since the rules for
selecting the number of queues seem to be the same for both.  It's also
not clear to me why it's limiting itself to 4 rx queues when I have 8
physical cores (and 16 logical ones with hyperthreading enabled).
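
(As a reference point, the logical and physical CPU counts can be read
straight out of /proc/cpuinfo:)

  # Logical CPUs the kernel sees (16 here with hyperthreading enabled)
  grep -c ^processor /proc/cpuinfo

  # Physical cores vs. hyperthread siblings per socket
  grep -E 'cpu cores|siblings' /proc/cpuinfo | sort -u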

My second question is about how the rx queues are mapped to interrupts.
According to /proc/interrupts there appears to be a 1:1 mapping between
queues and interrupts.  However, I've set up a test with a given amount
of traffic coming into the device (from 4 different IP addresses and 4
ports).  Under this scenario, "ethtool -S" shows the number of packets
increasing for only rx queue 0, but I see the interrupt count going up
for two interrupts.
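
(A rough way to correlate the per-queue counters with the interrupt counts
during such a test, assuming the interface is eth0, is to snapshot both
before and after the traffic run:)

  ethtool -S eth0 | grep rx_queue > /tmp/stats.before
  grep eth0 /proc/interrupts   > /tmp/irq.before
  sleep 10   # run the test traffic during this window
  ethtool -S eth0 | grep rx_queue > /tmp/stats.after
  grep eth0 /proc/interrupts   > /tmp/irq.after
  diff /tmp/stats.before /tmp/stats.after
  diff /tmp/irq.before   /tmp/irq.after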

My final question is about SMP affinity for the rx and tx queue
interrupts.  Do I need to affine the interrupt for each rx queue to a
single core to guarantee proper packet ordering, or can they be handled
on arbitrary cores?  Should the tx queue interrupt be affined to a
particular core or left to be handled by all cores?
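
(If manual pinning turns out to be the recommendation, the usual mechanism
would be along these lines; the IRQ number 50 below is just a placeholder
taken from /proc/interrupts.)

  # Stop irqbalance so it doesn't rewrite the affinity masks (if running)
  service irqbalance stop

  # Find the IRQ number of the queue's MSI-X vector
  grep eth0 /proc/interrupts

  # Pin that IRQ to CPU 2 (hex bitmask 0x4)
  echo 4 > /proc/irq/50/smp_affinity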

Thanks,

Chris

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
