Message-Id: <200903290447.58338.denys@visp.net.lb>
Date: Sun, 29 Mar 2009 03:47:58 +0200
From: Denys Fedoryschenko <denys@...p.net.lb>
To: netdev@...r.kernel.org
Subject: bnx2, 2.6.29, smp_affinity strangeness
Hi
While running bnx2 I tried to assign affinity to the network card and ... failed.
Here it is:
globax2 ~ # cat /proc/interrupts | grep eth1; cat /proc/irq/97/smp_affinity; sleep 4; cat /proc/interrupts | grep eth1
 97:  5637762  5637764  5637745  5637674  5637815  5637842  5637795  5637839  PCI-MSI-edge  eth1
10
 97:  5641756  5641754  5641739  5641669  5641809  5641836  5641789  5641833  PCI-MSI-edge  eth1
As you can see, smp_affinity is set to CPU4 (the 5th CPU if you count from 1), but
interrupts are still arriving on all CPUs.
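(For reference, the mask "10" above is hexadecimal: /proc/irq/<n>/smp_affinity takes a hex CPU bitmask, so CPU4 corresponds to 1 << 4 = 0x10. A minimal sketch of computing such a mask; the helper variable names are mine, not from the kernel:)

```shell
# Build the hex bitmask that /proc/irq/<n>/smp_affinity expects
# from a zero-based CPU index: bit N set means "deliver on CPU N".
cpu=4
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"    # prints 10, i.e. CPU4 only
```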
Meanwhile I found something even stranger.
I brought eth0 up with ifconfig, and in dmesg got:
[ 4855.804365] bnx2 0000:05:00.0: irq 98 for MSI/MSI-X
[ 4855.923017] bnx2: eth0: using MSI
default_smp_affinity was 0xf (the first 4 CPUs), and I got:
 98:  66  73  67  67  0  0  0  0  PCI-MSI-edge  eth0
But if I try to change smp_affinity while the interface is running, it won't change
anything.
If I bring the interface down, I see a strange entry:
 98:  90  90  86  88  0  0  0  0  none-<NULL>
Now I still have an entry in /proc/irq/98/, so while the interface is down I change the
affinity to 10. Voila: I bring it up, and the affinity works correctly:
 98:  90  90  86  88   51  0  0  0  PCI-MSI-edge  eth0
 98:  90  90  86  88  129  0  0  0  PCI-MSI-edge  eth0
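(For the record, the down/set/up sequence that worked boils down to the following sketch. Run as root; the IRQ number 98 and eth0 are taken from my output above, and note that the MSI vector number is not guaranteed to stay the same across down/up:)

```shell
# Workaround: set the affinity while the interface is down,
# then bring it back up and verify the mask stuck.
ifconfig eth0 down
echo 10 > /proc/irq/98/smp_affinity   # hex mask 0x10 = CPU4 only
ifconfig eth0 up
cat /proc/irq/98/smp_affinity         # should now print 10
```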
Is it correct that I am able to change the IRQ affinity only while the interface is
down? On other cards I can change it on the fly.