Message-ID: <20220423054331.GA17823@lst.de>
Date: Sat, 23 Apr 2022 07:43:31 +0200
From: Christoph Hellwig <hch@....de>
To: "brookxu.cn" <brookxu.cn@...il.com>
Cc: kbusch@...nel.org, axboe@...com, hch@....de, sagi@...mberg.me,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-pci@...r.kernel.org, tglx@...utronix.de, frederic@...nel.org
Subject: Re: [RFC PATCH] nvme-pci: allowed to modify IRQ affinity in
latency sensitive scenarios
On Fri, Apr 22, 2022 at 06:58:26PM +0800, brookxu.cn wrote:
> From: Chunguang Xu <brookxu@...cent.com>
>
> In most cases, setting IRQ affinity through managed IRQs is the better
> choice. But in scenarios that use isolcpus, such as DPDK, managed IRQs
> do not distinguish between housekeeping and isolated CPUs when
> selecting target CPUs, so IO interrupts triggered from a housekeeping
> CPU can be routed to an isolated CPU and disturb the tasks running
> there. Commit 11ea68f553e2 ("genirq, sched/isolation: Isolate from
> handling managed interrupts") tries to fix this in a best-effort way.
> However, in a real production environment, latency-sensitive workloads
> need a deterministic result. So, similar to the mpt3sas driver, we
> could add a module parameter smp_affinity_enable to the NVMe driver.
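[For context, a minimal sketch of what such an opt-out could look like on
the nvme-pci side, in the spirit of mpt3sas' smp_affinity_enable; this is
not the posted patch, and the helper nvme_irq_flags() is made up purely
for illustration:]

/* Illustrative only: opt out of managed IRQ affinity, mpt3sas-style. */
static bool smp_affinity_enable = true;
module_param(smp_affinity_enable, bool, 0444);
MODULE_PARM_DESC(smp_affinity_enable,
		 "SMP affinity feature enable/disable Default: enable(1)");

/* Hypothetical helper: compute the flags that would be passed to
 * pci_alloc_irq_vectors_affinity() in nvme_setup_irqs(). */
static unsigned int nvme_irq_flags(void)
{
	unsigned int flags = PCI_IRQ_ALL_TYPES;

	/*
	 * Without PCI_IRQ_AFFINITY the vectors are not managed, so their
	 * affinity can later be changed from user space via
	 * /proc/irq/<N>/smp_affinity and kept off isolated CPUs.
	 */
	if (smp_affinity_enable)
		flags |= PCI_IRQ_AFFINITY;
	return flags;
}
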
This kind of boilerplate code in random drivers is not sustainable.
I really think we need to handle this whole housekeeping-CPU case in
common code: designate CPUs as housekeeping vs non-housekeeping and
let the generic affinity assignment code deal with it, solving it for
all drivers with the proper affinity masks instead of sprinkling
slightly different overrides into every driver anyone ever wants to
use on such a system.
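
[A rough sketch of that direction, not actual kernel code: the generic
spreading code in kernel/irq/affinity.c could prefer housekeeping CPUs
when building managed affinity masks, so that no driver needs its own
override. This assumes the housekeeping_cpumask() interface from
<linux/sched/isolation.h> and its managed-IRQ flag (HK_TYPE_MANAGED_IRQ,
or HK_FLAG_MANAGED_IRQ on older kernels):]

#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

/* Sketch: shrink a managed-IRQ spread mask to housekeeping CPUs,
 * falling back to the full spread if no housekeeping CPU is left. */
static void restrict_to_housekeeping(struct cpumask *mask)
{
	const struct cpumask *hk = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

	if (cpumask_intersects(mask, hk))
		cpumask_and(mask, mask, hk);
}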