Message-ID: <alpine.DEB.2.21.1809031719460.1383@nanos.tec.linutronix.de>
Date: Mon, 3 Sep 2018 18:28:06 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Kashyap Desai <kashyap.desai@...adcom.com>
cc: Ming Lei <tom.leiming@...il.com>,
Sumit Saxena <sumit.saxena@...adcom.com>,
Ming Lei <ming.lei@...hat.com>, Christoph Hellwig <hch@....de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Shivasharan Srikanteshwara
<shivasharan.srikanteshwara@...adcom.com>,
linux-block <linux-block@...r.kernel.org>
Subject: RE: Affinity managed interrupts vs non-managed interrupts
On Mon, 3 Sep 2018, Kashyap Desai wrote:
> I am using "for-4.19/block", and this particular patch, "a0c9259
> irq/matrix: Spread interrupts on allocation", is included.
Can you please try against 4.19-rc2 or later?
> I can see that the 16 extra reply queues via pre_vectors are still
> assigned to CPU 0 (effective affinity).
>
> irq 33, cpu list 0-71
The cpu list is irrelevant because that's the allowed affinity mask. The
effective one is what counts.
> # cat /sys/kernel/debug/irq/irqs/34
> node: 0
> affinity: 0-71
> effectiv: 0
So if all 16 have their effective affinity set to CPU0, then that's strange
at least.
Can you please provide the output of /sys/kernel/debug/irq/domains/VECTOR ?
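Gathering that information can be scripted; a minimal sketch, assuming
debugfs is mounted at /sys/kernel/debug and using an assumed irq range
(34-49) purely for illustration of the 16 pre_vectors queues:

```shell
# Extract the "effectiv" field from the contents of one
# /sys/kernel/debug/irq/irqs/<N> file fed on stdin.
eff_affinity() {
    awk -F': *' '/^effectiv/ { print $2 }'
}

# On a live system (debugfs mounted, run as root) the 16 queues could be
# summarized like this -- irqs 34-49 is an assumed range for illustration:
#   for irq in $(seq 34 49); do
#       printf 'irq %s -> effective %s\n' "$irq" \
#           "$(eff_affinity < /sys/kernel/debug/irq/irqs/$irq)"
#   done
#   cat /sys/kernel/debug/irq/domains/VECTOR
```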
> Ideally, what we are looking for is that the 16 extra pre_vector reply
> queues have their "effective affinity" within the local numa node, as
> long as that numa node has online CPUs. If not, we are ok with an
> effective cpu from any node.
Well, we surely can do the initial allocation and spreading on the local
numa node, but once all CPUs are offline on that node, then the whole thing
goes down the drain and allocates from where it sees fit. I'll think about
it some more, especially how to avoid the proliferation of the affinity
hint.
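The fallback policy described above (prefer the local node, else any online
CPU) can be illustrated with shell arithmetic; integer bitmasks stand in for
cpumasks here, which is a simplification and not the kernel's implementation:

```shell
# pick_affinity ONLINE_MASK NODE_MASK
# Prefer CPUs that are both online and in the local numa node; if that
# intersection is empty (node fully offline), fall back to all online CPUs.
pick_affinity() {
    online=$1
    node_cpus=$2
    local_cpus=$(( online & node_cpus ))
    if [ "$local_cpus" -ne 0 ]; then
        echo "$local_cpus"
    else
        echo "$online"
    fi
}

# Example: node 0 holds CPUs 0-3 (mask 0x0F) but only CPUs 4-7 are online
# (mask 0xF0), so the intersection is empty and the full online mask wins.
pick_affinity $((0xF0)) $((0x0F))
```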
Thanks,
tglx