Message-ID: <20191207080335.GA6077@ming.t460p>
Date: Sat, 7 Dec 2019 16:03:35 +0800
From: Ming Lei <ming.lei@...hat.com>
To: John Garry <john.garry@...wei.com>
Cc: tglx@...utronix.de, chenxiang66@...ilicon.com,
bigeasy@...utronix.de, linux-kernel@...r.kernel.org,
maz@...nel.org, hare@...e.com, hch@....de, axboe@...nel.dk,
bvanassche@....org, peterz@...radead.org, mingo@...hat.com
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity
for managed interrupt
On Fri, Dec 06, 2019 at 10:35:04PM +0800, John Garry wrote:
> Currently the cpu allowed mask for the threaded part of a threaded irq
> handler will be set to the effective affinity of the hard irq.
>
> Typically the effective affinity of the hard irq will be for a single cpu. As such,
> the threaded handler would always run on the same cpu as the hard irq.
>
> We have seen scenarios in high data-rate throughput testing where the cpu
> handling the interrupt can be totally saturated handling both the hard
> interrupt and threaded handler parts, limiting throughput.
Frankly speaking, I have never observed a single CPU being saturated by one
storage completion queue's interrupt load, because the CPU is still much
quicker than current storage devices.
If there are more drives, one CPU won't handle more than one queue's (drive's)
interrupts as long as (nr_drive * nr_hw_queues) < nr_cpu_cores.
So could you describe your case in a bit more detail? Then we can confirm
whether this change is really needed.
>
> For when the interrupt is managed, allow the threaded part to run on all
> cpus in the irq affinity mask.
I remember that a performance drop was observed with this approach in some
tests.
Thanks,
Ming