Message-Id: <1167232007.3281.3931.camel@laptopd505.fenrus.org>
Date:	Wed, 27 Dec 2006 16:06:47 +0100
From:	Arjan van de Ven <arjan@...radead.org>
To:	hadi@...erus.ca
Cc:	Robert Iakobashvili <coroberti@...il.com>, netdev@...r.kernel.org
Subject: Re: Network card IRQ balancing with Intel 5000 series chipsets

On Wed, 2006-12-27 at 09:44 -0500, jamal wrote:
> On Wed, 2006-27-12 at 14:08 +0100, Arjan van de Ven wrote:
> 
> > Sure; however, the kernel doesn't currently provide more accurate
> > information (and I doubt it even could; it's not so easy to figure out
> > which interface triggered the softirq if two interfaces share the CPU,
> > and then how much work came from which, etc.).
> > 
> 
> If you sample CPU use, and between two samples you are able to know
> which NIC is tied to which CPU, how many cycles that CPU consumed in
> user vs. kernel, and how many packets were seen on that NIC, then you
> should have the info necessary to make a decision, no?
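
(For concreteness, a minimal userspace sketch of that sampling idea:
diff per-NIC packet counts from /proc/net/dev across an interval. The
field positions assume the standard two-header-line /proc/net/dev
layout; tying each NIC back to a CPU, e.g. via /proc/interrupts, and the
actual balancing decision are left out.)

/*
 * Sample per-NIC RX/TX packet counts from /proc/net/dev twice and print
 * the per-second deltas.  Assumes the interface list and its order do
 * not change between the two samples.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_IF 32

struct ifstat {
	char name[32];
	unsigned long long rx_pkts, tx_pkts;
};

static int read_ifstats(struct ifstat *st, int max)
{
	char line[512];
	int n = 0;
	FILE *f = fopen("/proc/net/dev", "r");

	if (!f)
		return -1;
	fgets(line, sizeof(line), f);	/* skip the two header lines */
	fgets(line, sizeof(line), f);

	while (n < max && fgets(line, sizeof(line), f)) {
		unsigned long long v[10];
		char *colon = strchr(line, ':');

		if (!colon)
			continue;
		*colon = '\0';
		sscanf(line, " %31s", st[n].name);
		/* rx: bytes packets errs drop fifo frame compr mcast,
		 * then tx: bytes packets ... */
		if (sscanf(colon + 1,
			   "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
			   &v[0], &v[1], &v[2], &v[3], &v[4],
			   &v[5], &v[6], &v[7], &v[8], &v[9]) == 10) {
			st[n].rx_pkts = v[1];	/* receive packets */
			st[n].tx_pkts = v[9];	/* transmit packets */
			n++;
		}
	}
	fclose(f);
	return n;
}

int main(void)
{
	struct ifstat a[MAX_IF], b[MAX_IF];
	int i, na, nb;

	na = read_ifstats(a, MAX_IF);
	sleep(1);			/* sample interval */
	nb = read_ifstats(b, MAX_IF);

	for (i = 0; i < na && i < nb; i++)
		printf("%-8s rx %llu pkts/s  tx %llu pkts/s\n", a[i].name,
		       b[i].rx_pkts - a[i].rx_pkts,
		       b[i].tx_pkts - a[i].tx_pkts);
	return 0;
}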

Note that getting softirq time itself isn't a problem; that is actually
available (it's not very accurate, but that's another kettle of fish
entirely).
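
(For reference, that's presumably the softirq column of /proc/stat; a
minimal sketch of reading it per CPU, assuming the 2.6-era field order
of user nice system idle iowait irq softirq on the "cpuN" lines:)

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/stat", "r");

	if (!f) {
		perror("/proc/stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		unsigned long long user, nice, sys, idle, iowait, irq, softirq;
		int cpu;

		/* only the per-CPU lines ("cpu0 ..."), not the summary line */
		if (strncmp(line, "cpu", 3) || !isdigit((unsigned char)line[3]))
			continue;
		if (sscanf(line + 3, "%d %llu %llu %llu %llu %llu %llu %llu",
			   &cpu, &user, &nice, &sys, &idle,
			   &iowait, &irq, &softirq) == 8)
			printf("cpu%d: softirq = %llu jiffies\n", cpu, softirq);
	}
	fclose(f);
	return 0;
}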

But... no, that isn't better than packet counts.
Cases where it simply breaks:
1) you have more NICs than CPUs, so you HAVE to have sharing;
2) there is other load going on besides pure networking (storage, but
also timers and .. and ..).

And neither case is even remotely artificial.

> Yes, I know it is
> a handwave on my part and it is complex, but by the same token I would
> suspect each kind of IO-derived work (which results in interrupts) will
> have more inputs that could help you make a proper decision than a mere
> glance at the interrupts. I understand, for example, that the SCSI
> subsystem these days behaves very much like NAPI.

The difference between SCSI and networking is that the work SCSI does
per "sector" is orders and orders of magnitude less than what networking
does. SCSI does its work mostly per "transfer", not per sector, and if
you're busy you tend to get larger transfers as well (megabyte-sized
transfers are not unusual). SCSI also doesn't look at the payload at
all, unlike networking (where there are those pesky headers every 1500
bytes or less that the kernel needs to look at :)


> It is certainly much more promising now than before. Most people will
> probably have symmetrical types of apps, so it should work for them.
> Someone like myself will still not use it, because I typically don't
> have symmetrical loads.

Unless you have more NICs than you have CPUs, irqbalance will do the
right thing anyway (it will tend not to share or move networking
interrupts). And once you have more NICs than you have CPUs... see
above.
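
(For anyone who does hit the more-NICs-than-CPUs case and wants to place
interrupts by hand instead of leaving it to irqbalance, the knob is
/proc/irq/<n>/smp_affinity; a minimal sketch, where the IRQ number and
CPU mask are illustrative command-line arguments:)

#include <stdio.h>
#include <stdlib.h>

/* Write a hex CPU mask to /proc/irq/<irq>/smp_affinity. */
static int pin_irq(int irq, unsigned long cpu_mask)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%lx\n", cpu_mask);
	return fclose(f) ? -1 : 0;
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <irq> <hex cpu mask>\n", argv[0]);
		return 1;
	}
	return pin_irq(atoi(argv[1]), strtoul(argv[2], NULL, 16)) ? 1 : 0;
}

(e.g. "pin_irq 50 2" would tie the illustrative IRQ 50 to CPU 1; the
same thing can of course be done with a plain echo into the proc file.)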

-- 
if you want to mail me at work (you don't), use arjan (at) linux.intel.com
Test the interaction between Linux and your BIOS via http://www.linuxfirmwarekit.org

