Message-ID: <f22e87c1-6875-9a59-94db-42a6f71663b8@sandisk.com>
Date: Thu, 16 Jun 2016 17:39:07 +0200
From: Bart Van Assche <bart.vanassche@...disk.com>
To: Christoph Hellwig <hch@....de>
CC: Keith Busch <keith.busch@...el.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"axboe@...com" <axboe@...com>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag
On 06/16/2016 05:20 PM, Christoph Hellwig wrote:
> On Wed, Jun 15, 2016 at 09:36:54PM +0200, Bart Van Assche wrote:
>> Do you agree that - ignoring other interrupt assignments - the latter
>> interrupt assignment scheme would result in higher throughput and lower
>> interrupt processing latency?
>
> Probably. Once we've got it in the core IRQ code we can tweak the
> algorithm to be optimal.
Sorry, but I'm afraid that we are embedding policy in the kernel,
something we should not do. I know that there are workloads for which
dedicating some CPU cores to interrupt processing and other CPU cores to
running kernel threads improves throughput. This is probably because it
results in less cache eviction on the CPU cores that run kernel threads
and some degree of interrupt coalescing on the CPU cores that process
interrupts. However, I doubt that any single interrupt assignment scheme
works optimally for all workloads. Hence my request to preserve the
ability to modify interrupt affinity from user space.
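
For reference, the user-space mechanism I would like to see preserved is
the procfs affinity interface. A rough sketch (the IRQ number below is
made up; pick a real one from /proc/interrupts, and note that the write
requires root and is rejected for IRQs whose affinity is managed by the
kernel):

```shell
# Hypothetical IRQ number for illustration only.
IRQ=30

# Build a CPU mask selecting cores 2 and 3: bits 2 and 3 set -> 0xc.
MASK=$(printf '%x' $(( (1 << 2) | (1 << 3) )))
echo "mask=$MASK"

# Restrict the IRQ to those cores (requires root; fails with EIO/EPERM
# if the kernel or driver does not allow affinity changes for this IRQ):
# echo "$MASK" > /proc/irq/$IRQ/smp_affinity

# Read back the current affinity mask:
# cat /proc/irq/$IRQ/smp_affinity
```

This is exactly the knob that tools like irqbalance rely on, which is
why taking it away for managed interrupts is a behavior change visible
from user space.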
Bart.