Message-ID: <6e513d25d8b0c6b95d37a64df0c27b78@www.loen.fr>
Date: Tue, 10 Dec 2019 11:36:50 +0000
From: Marc Zyngier <maz@...nel.org>
To: John Garry <john.garry@...wei.com>
Cc: Ming Lei <ming.lei@...hat.com>, <tglx@...utronix.de>,
<chenxiang66@...ilicon.com>, <bigeasy@...utronix.de>,
<linux-kernel@...r.kernel.org>, <hare@...e.com>, <hch@....de>,
<axboe@...nel.dk>, <bvanassche@....org>, <peterz@...radead.org>,
<mingo@...hat.com>
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
On 2019-12-10 10:59, John Garry wrote:
>>>
>>> There is no lockup, just a potential performance boost in this
>>> change.
>>>
>>> My colleague Xiang Chen can provide specifics of the test, as he is
>>> the one running it.
>>>
>>> But one key bit of info - which I did not think most relevant
>>> before - is that we have 2x SAS controllers running the throughput
>>> test on the same host.
>>>
>>> As such, the completion queue interrupts would be spread identically
>>> over the CPUs for each controller. I notice that the ARM GICv3 ITS
>>> interrupt controller (which we use) does not use the generic irq
>>> matrix allocator, which I think would really help with this.
>>>
>>> Hi Marc,
>>>
>>> Is there any reason why we couldn't utilise the generic irq
>>> matrix allocator for GICv3?
>>
>
> Hi Marc,
>
>> For a start, the ITS code predates the matrix allocator by about
>> three years. Also, my understanding of this allocator is that it
>> allows x86 to cope with a very small number of possible interrupt
>> vectors per CPU. The ITS doesn't have such an issue, as:
>> 1) the namespace is global, and not per CPU
>> 2) the namespace is *huge*
>> Now, what property of the matrix allocator is the ITS code missing?
>> I'd be more than happy to improve it.
>
> I think specifically the property that the matrix allocator will try
> to find a CPU for irq affinity which "has the lowest number of
> managed IRQs allocated" - I'm quoting the comment on
> matrix_find_best_cpu_managed().
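For reference, the selection loop behind that comment boils down to
something like this (a simplified sketch of kernel/irq/matrix.c, not
the verbatim source):

    /* Pick the online CPU in the search mask with the fewest
     * managed IRQs already allocated to it.
     */
    static unsigned int find_best_cpu_managed(struct irq_matrix *m,
                                              const struct cpumask *msk)
    {
            unsigned int cpu, best_cpu = UINT_MAX;
            unsigned int allocated = UINT_MAX;

            for_each_cpu(cpu, msk) {
                    struct cpumap *cm = per_cpu_ptr(m->maps, cpu);

                    if (!cm->online || cm->managed_allocated > allocated)
                            continue;

                    best_cpu = cpu;
                    allocated = cm->managed_allocated;
            }
            return best_cpu;
    }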
But that decision is due to allocation constraints. You can have at
most 256 interrupts per CPU, so the allocator tries to balance them.
On the contrary, the ITS doesn't care about how many interrupts target
any given CPU. The whole 2^24 interrupt namespace can be thrown at a
single CPU.
> The ITS code will make the lowest online CPU in the affinity mask the
> target CPU for the interrupt, which may result in some CPUs handling
> far more interrupts than others.
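Right - on an affinity change, the target selection is roughly
equivalent to this (a sketch, not the exact driver code, and
its_pick_target is just an illustrative name):

    /* Take the first (i.e. lowest-numbered) online CPU in the
     * requested affinity mask. Two interrupts with the same mask
     * therefore always land on the same CPU.
     */
    static unsigned int its_pick_target(const struct cpumask *mask_val)
    {
            return cpumask_first_and(mask_val, cpu_online_mask);
    }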
If what you want is for the *default* affinity to be spread around,
that should be achieved pretty easily. Let me have a think about how
to do that.
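One possible (and so far untested) shape for that would be to track
how many LPIs target each CPU and pick the least loaded online CPU
from the mask, along these lines (an illustrative sketch only, with
made-up names):

    static atomic_t lpi_count[NR_CPUS]; /* hypothetical per-CPU LPI load */

    static unsigned int its_least_loaded_cpu(const struct cpumask *mask_val)
    {
            unsigned int cpu, best = nr_cpu_ids, min = UINT_MAX;

            for_each_cpu_and(cpu, mask_val, cpu_online_mask) {
                    unsigned int cnt = atomic_read(&lpi_count[cpu]);

                    if (cnt < min) {
                            min = cnt;
                            best = cpu;
                    }
            }
            return best;
    }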
M.
--
Jazz is not dead. It just smells funny...