Message-ID: <20091119131650.GA14830@spaans64.fox.local>
Date: Thu, 19 Nov 2009 14:16:50 +0100
From: Jasper Spaans <spaans@...-it.com>
To: <linux-kernel@...r.kernel.org>
Subject: dell r610, 16-cores, interrupt issue
Hi lists,
I'm working on setting up a Dell R610 machine with two quad-core Xeons and
hyperthreading, giving a total of 16 logical CPUs.
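(A quick sanity check of the logical CPU count, nothing more than that:)

spaans@...0 ~ $ grep -c '^processor' /proc/cpuinfo
16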
Running the current git head with CONFIG_NR_CPUS=16 gives me unexpected
IRQ handling:
spaans@...0 ~ $ cat /proc/interrupts
             CPU0     CPU1     CPU2     CPU3     CPU4     CPU5     CPU6     CPU7     CPU8     CPU9    CPU10    CPU11    CPU12    CPU13    CPU14    CPU15
  0:   230366        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   IO-APIC-edge      timer
  1:        2        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   IO-APIC-edge      i8042
  8:      114        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   IO-APIC-edge      rtc0
  9:        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   IO-APIC-fasteoi   acpi
 12:        4        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   IO-APIC-edge      i8042
 16:     2220        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   IO-APIC-fasteoi   ioc0
...
 61:  1456518        0        0        0        0        0        0        0        0        0        0        0        0        0        0        0   PCI-MSI-edge      eth4
...
so, all external interrupts seem to end up being handled by CPU0.
When doing the same with CONFIG_NR_CPUS=8, each interrupt source is assigned
its own CPU to handle its interrupts, but there is still only one CPU
servicing each IRQ.
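For what it's worth, a rough way to summarise which CPUs actually service
each IRQ (just an awk sketch over /proc/interrupts, nothing authoritative):

awk '
    # first line of /proc/interrupts: one label per CPU column
    NR == 1 { ncpu = NF; next }
    # numeric IRQ lines: print the IRQ plus every CPU with a non-zero count
    /^ *[0-9]+:/ {
        line = $1
        for (i = 2; i <= ncpu + 1; i++)
            if ($i + 0 > 0)
                line = line sprintf("  CPU%d=%s", i - 2, $i)
        print line
    }' /proc/interrupts

With the output above, that prints e.g. "61:  CPU0=1456518" for every
external interrupt.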
Furthermore, nothing looks weird in /proc/irq/*/smp_affinity: all bits are
set there. If I mask out the first CPU for an IRQ, the next CPU starts
receiving its interrupts:
r610 61 # echo fffe >smp_affinity
r610 61 # cat /proc/interrupts
...
 61:  4392347    35086        0        0        0        0        0        0        0        0        0        0        0        0        0        0   PCI-MSI-edge      eth4
...
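In case it helps, here is a minimal sketch of spreading the IRQs round-robin
by hand through that same interface, assuming 16 logical CPUs and that the
kernel accepts the writes (some IRQs, like the timer, may refuse; errors are
simply ignored here, and it needs to run as root):

cpu=0
for irq in /proc/irq/[0-9]*; do
    # single-CPU mask: 1, 2, 4, ... 8000 (hex), one bit per logical CPU
    printf '%x\n' $((1 << cpu)) > "$irq/smp_affinity" 2>/dev/null
    cpu=$(( (cpu + 1) % 16 ))
done

The "echo fffe" above does the opposite of a single-CPU mask: it allows every
CPU except CPU0 for that one IRQ.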
Is this supposed to happen, is it a bug, or did I make a configuration
booboo somewhere?
The configuration (filtered through grep -v '^#') is attached, as is the dmesg output.
Thanks,
Jasper
--
Fox-IT Experts in IT Security!
T: +31 (0) 15 284 79 99
KvK Haaglanden 27301624
Attachment: "r610-conf-head.txt" (text/plain, 34066 bytes)
Attachment: "dmesg1.txt" (text/plain, 85912 bytes)