Message-ID: <7e63f56c0612262309p5337a753q3b1748910fce70b5@mail.gmail.com>
Date:	Wed, 27 Dec 2006 09:09:34 +0200
From:	"Robert Iakobashvili" <coroberti@...il.com>
To:	hadi@...erus.ca
Cc:	"Arjan van de Ven" <arjan@...radead.org>, netdev@...r.kernel.org
Subject: Re: Network card IRQ balancing with Intel 5000 series chipsets

On 12/27/06, jamal <hadi@...erus.ca> wrote:
> On Wed, 2006-27-12 at 01:28 +0100, Arjan van de Ven wrote:
>
> > current irqbalance accounts for napi by using the number of packets as
> > indicator for load, not the number of interrupts. (for network
> > interrupts obviously)
> >
>
> Sounds a lot more promising.
> Although still insufficient in certain cases. All flows are not equal; as an
> example, an IPSEC flow with 1000 packets bound to one CPU  will likely
> utilize more cycles than 5000 packets that are being plain forwarded on
> another CPU.

I agree with Jamal that there is a problem here.

My scenario is processing RTP packets in kernel space with a single network
card (both Rx and Tx). The default on the Intel 5000 series chipset is to bind
each network card's interrupt to a particular CPU. Currently, neither with
irqbalance nor with the kernel's IRQ balancing (MSI and io-apic both attempted)
have I found a way to balance that IRQ.
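
For reference, a minimal sketch of pinning an IRQ by hand through
/proc/irq/<N>/smp_affinity, which is the same interface irqbalance drives;
the IRQ number and CPU mask below are just example command-line arguments,
not values from my setup:

/* pin-irq.c: write a hex CPU mask to /proc/irq/<irq>/smp_affinity */
#include <stdio.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <irq> <hex-cpu-mask>\n", argv[0]);
		return 1;
	}

	char path[64];
	snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", argv[1]);

	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}

	/* e.g. mask "2" allows only CPU1, "f" allows CPU0-CPU3;
	   with io-apic the kernel may still deliver to a single CPU */
	fprintf(f, "%s\n", argv[2]);
	return fclose(f) ? 1 : 0;
}

This needs root, and a running irqbalance may rewrite the mask again
unless it is stopped first.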

Keeping a static CPU affinity for a network card's interrupt is a good design
in general. However, what I see is that CPU0 is less than 10% idle, while the
other three cores (two dual-core Intel CPUs) are doing next to nothing.
There is a real CPU-scaling problem with such a design: some day we may want
to add a 10 Gbps network card and 16 cores/CPUs, but that will not help us
scale.

Some cards probably have separate Rx and Tx interrupts. Still, scaling
remains an issue.

I will look into the PCI-E option, thanks Jamal.


-- 
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://sourceforge.net/projects/curl-loader
A powerful open-source HTTP/S, FTP/S traffic
generating, loading and testing tool.
