Message-ID: <CAL8zT=gm_1R2NuC0Tfii+Vpy1eu2tUXgVBa8g+i=9Ojs8T_p+A@mail.gmail.com>
Date: Thu, 19 Jan 2012 16:52:09 +0100
From: Jean-Michel Hautbois <jhautbois@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev <netdev@...r.kernel.org>,
Jean-Michel Hautbois <jhautbois@...il.com>
Subject: Re: MSI-X and interrupt affinity with Emulex be2net
2012/1/19 Eric Dumazet <eric.dumazet@...il.com>:
> On Thursday 19 January 2012 at 15:33 +0100, Jean-Michel Hautbois wrote:
>> Hi all,
>>
>> I am using a be2net ethernet interface, and am on the master git
>> repository, so the latest kernel available.
>> I have MSI-X enabled of course, and as far as I can tell, all
>> interrupts are bound to CPU0.
>> I have several processes and threads which use nearly all of the
>> ethernet interfaces I have, and I reach 7180 IRQs/sec on CPU0.
>>
>> I have several questions regarding this:
>> - I thought MSI-X would help in balancing IRQs on different CPUs
>
> with some help from admin (or irqbalance ?), yes...
Irqbalance is a nice tool, but I thought MSI-X interrupts would be
spread across CPUs automatically on this particular hardware...
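(A quick way to check, for anyone finding this in the archives: the
sketch below assumes irqbalance is installed; as far as I know its
--oneshot flag computes a placement once and then exits.)

# pidof irqbalance || irqbalance --oneshot
# grep eth /proc/interrupts

If irqbalance is not running, each IRQ simply keeps its default
affinity mask, which would explain everything landing on CPU0.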
>> - Is it a good idea anyway to manually specify which IRQ is bound to
>> which CPU, if I pin the process using the IRQ to the same CPU?
>
> There is no general answer to this question unfortunately.
>
> It all depends on the workload, the number of flows, balance between
> transmits and receives ...
>
> Also be2net has one single interrupt for the tx-completion side.
>
> For example, IP defragmentation uses a rwlock, so splitting your traffic
> across several CPUs might increase false sharing and contention.
>
> About the "all interrupts on CPU0", this is what happens with default
> smp_affinity settings.
>
> Here I have a be2net and can change IRQ affinities with no problem
>
> # grep eth3 /proc/interrupts
> 76: 1418189 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-tx
> 77: 24831 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq0
> 78: 5147 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq1
> 79: 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq2
> 80: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq3
> 81: 118 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq4
> # cat /proc/irq/77/smp_affinity
> ffff
> # echo 2 >/proc/irq/77/smp_affinity
> # echo 4 >/proc/irq/78/smp_affinity
> # echo 8 >/proc/irq/79/smp_affinity
> # echo 10 >/proc/irq/80/smp_affinity
> # echo 20 >/proc/irq/81/smp_affinity
> # grep eth3 /proc/interrupts
> 76: 1426698 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-tx
> 77: 24832 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq0
> 78: 5318 0 3489 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq1
> 79: 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq2
> 80: 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq3
> 81: 167 0 0 0 0 229 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge eth3-rxq4
>
OK, I understand now, I see the same behaviour here.
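(For the archives: those smp_affinity values are hexadecimal CPU
bitmasks, so 2 pins an IRQ to CPU1, 4 to CPU2, 8 to CPU3, 10 to CPU4
and 20 to CPU5, while ffff allows CPUs 0-15. The boot-time default
applied to every IRQ can be read the same way:)

# cat /proc/irq/default_smp_affinity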
>> - How can I measure the performance impact (using perf, probably) of no
>> balancing (i.e., all on CPU0) versus balancing as explained above?
>
> Yes, perf is a good tool.
How would you launch it? Using perf record and then analysing the
results, which can take quite a long time, or is there a trick, some
good command line, that would show almost instantly whether IRQs are
well balanced :) ?
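Something along these lines is what I would try first (assuming
sysstat is installed, for mpstat):

# mpstat -P ALL 1
(per-CPU %irq and %soft, refreshed every second)
# watch -n1 'grep eth3 /proc/interrupts'
(shows on which CPU the counters are growing)
# perf record -a -g -- sleep 10 ; perf report
(system-wide call-graph profile of those 10 seconds)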
JM