Message-ID: <a7ef3810-31af-013a-6d18-ceb6154aa2ef@huawei.com>
Date: Fri, 13 Dec 2019 17:50:47 +0000
From: John Garry <john.garry@...wei.com>
To: Ming Lei <ming.lei@...hat.com>
CC: "tglx@...utronix.de" <tglx@...utronix.de>,
"chenxiang (M)" <chenxiang66@...ilicon.com>,
"bigeasy@...utronix.de" <bigeasy@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"maz@...nel.org" <maz@...nel.org>, "hare@...e.com" <hare@...e.com>,
"hch@....de" <hch@....de>, "axboe@...nel.dk" <axboe@...nel.dk>,
"bvanassche@....org" <bvanassche@....org>,
"peterz@...radead.org" <peterz@...radead.org>,
"mingo@...hat.com" <mingo@...hat.com>
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity
for managed interrupt
On 13/12/2019 17:12, Ming Lei wrote:
>> irq 96, cpu list 80-83, effective list 81
>> irq 97, cpu list 84-87, effective list 86
>> irq 98, cpu list 88-91, effective list 89
>> irq 99, cpu list 92-95, effective list 93
>> john@...ntu:~$
>>
>> I'm now thinking that we should just attempt this intelligent CPU affinity
>> assignment for managed interrupts.
> Right, the rule is simple: distribute the effective list among CPUs
> evenly, while selecting each irq's effective CPU from that irq's
> affinity mask.
>
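For illustration, here's a minimal userspace sketch (not kernel code) of
that rule, assuming one 4-CPU affinity mask per irq as in the output
quoted above; the per-CPU load counter and the irq/mask layout are
placeholders of mine, not anything taken from the kernel:

	/*
	 * Sketch: pick each irq's effective CPU from within its own
	 * affinity mask, preferring the CPU that currently has the
	 * fewest effective irqs assigned to it.
	 */
	#include <stdio.h>

	#define NR_CPUS 96

	int main(void)
	{
		int load[NR_CPUS] = { 0 };	/* effective irqs per CPU */
		int irq, cpu, best;

		/* irqs 96..99 with masks 80-83, 84-87, 88-91, 92-95 */
		for (irq = 96; irq <= 99; irq++) {
			int first = 80 + (irq - 96) * 4;

			/* least-loaded CPU within the affinity mask */
			best = first;
			for (cpu = first; cpu < first + 4; cpu++)
				if (load[cpu] < load[best])
					best = cpu;

			load[best]++;
			printf("irq %d, cpu list %d-%d, effective cpu %d\n",
			       irq, first, first + 3, best);
		}
		return 0;
	}

With all loads starting at zero and disjoint masks this trivially lands
on one CPU per mask; the least-loaded selection only starts to matter
once masks overlap or some CPUs already carry other effective irqs.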
Even if we fix that, a single CPU could still end up handling multiple
nvme completion queues due to many factors, such as CPU count, probe
ordering, and other PCI endpoints in the system, so this lockup still
needs to be remedied.
Thanks,
John