Date:	Thu, 19 Jan 2012 16:33:42 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Jean-Michel Hautbois <jhautbois@...il.com>
Cc:	netdev <netdev@...r.kernel.org>
Subject: Re: MSI-X and interrupt affinity with Emulex be2net

On Thursday, 19 January 2012 at 15:33 +0100, Jean-Michel Hautbois wrote:
> Hi all,
> 
> I am using a be2net ethernet interface, and am on the master git
> repository, i.e. the latest available kernel.
> I have MSI-X enabled of course, and as far as I can tell, all
> interrupts are bound to CPU0.
> I have several processes and threads which use nearly all of the
> ethernet interfaces I have, and I reach 7180 IRQs/sec on CPU0.
> 
> I have several questions regarding this:
> - I thought MSI-X would help in balancing IRQs on different CPUs

    With some help from the admin (or irqbalance?), yes...
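
    If you are not sure whether an irqbalance daemon is even present,
    a quick check is something like the following (package and service
    names vary by distro, so treat this only as a rough sketch):

    # pgrep -l irqbalance || echo "no irqbalance daemon running"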

> - Is it a good idea anyway to manually specify which IRQ is bound to
> which CPU, if I pin the process using that IRQ to the same CPU?

There is no general answer to this question unfortunately.

It all depends on the workload, the number of flows, the balance between
transmits and receives ...

Also, be2net has a single interrupt for the tx-completion side.

For example, IP defragmentation uses an rwlock, so splitting your traffic
across several cpus might increase false sharing and contention.
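
If you do want to try the manual pinning you describe, a rough sketch is
to give the rx queue IRQ and the consuming process the same CPU mask.
The IRQ number, the mask (0x4 = CPU2) and the pid below are only
placeholders for illustration, not values from this box:

# echo 4 > /proc/irq/77/smp_affinity
# taskset -p 4 1234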

About the "all interrupts on CPU0" point: this is what happens with the
default smp_affinity settings.

Here I have a be2net and can change IRQ affinities with no problem:

# grep eth3 /proc/interrupts 
  76:  1418189    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-tx
  77:    24831    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq0
  78:     5147    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq1
  79:        7    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq2
  80:        2    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq3
  81:      118    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq4
# cat /proc/irq/77/smp_affinity 
ffff
# echo 2 >/proc/irq/77/smp_affinity
# echo 4 >/proc/irq/78/smp_affinity
# echo 8 >/proc/irq/79/smp_affinity
# echo 10 >/proc/irq/80/smp_affinity
# echo 20 >/proc/irq/81/smp_affinity
# grep eth3 /proc/interrupts 
  76:  1426698    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-tx
  77:    24832    7    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq0
  78:     5318    0 3489    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq1
  79:        7    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq2
  80:        3    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq3
  81:      167    0    0    0    0  229    0    0    0    0    0    0    0    0    0    0   PCI-MSI-edge   eth3-rxq4
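
Note that smp_affinity takes a hexadecimal cpu bitmask, so the values
written above (2, 4, 8, 10, 20) map to CPU1..CPU5, which is why the new
interrupt counts show up in those columns. If you prefer a loop, a rough
equivalent for this particular IRQ range (only a sketch, adjust the IRQ
numbers and starting cpu to your own box) would be:

# cpu=1
# for irq in 77 78 79 80 81; do
>     printf '%x\n' $((1 << cpu)) > /proc/irq/$irq/smp_affinity
>     cpu=$((cpu + 1))
> done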


> - How can I measure the performance impact (using perf, probably) of no
> balancing (i.e., all on CPU0) versus balancing as explained above?

Yes, perf is a good tool.
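
As a starting point, something like the following run once per setup
should give a first idea. This is only a sketch: the event list and the
10-second window are arbitrary, the irq tracepoint needs a kernel that
exposes it, and -A just breaks the counts out per cpu.

# perf stat -a -A -e irq:irq_handler_entry,cycles,cache-misses sleep 10

Then perf record -a -g followed by perf report if you want to see where
the cycles actually go.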

