Message-ID: <214947849a681fc702d018383a3f95ac@www.loen.fr>
Date: Fri, 13 Dec 2019 10:31:10 +0000
From: Marc Zyngier <maz@...nel.org>
To: John Garry <john.garry@...wei.com>
Cc: Ming Lei <ming.lei@...hat.com>, <tglx@...utronix.de>,
"chenxiang (M)" <chenxiang66@...ilicon.com>,
<bigeasy@...utronix.de>, <linux-kernel@...r.kernel.org>,
<hare@...e.com>, <hch@....de>, <axboe@...nel.dk>,
<bvanassche@....org>, <peterz@...radead.org>, <mingo@...hat.com>
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
Hi John,
On 2019-12-13 10:07, John Garry wrote:
> On 11/12/2019 09:41, John Garry wrote:
>> On 10/12/2019 18:32, Marc Zyngier wrote:
>>>>>> The ITS code will make the lowest online CPU in the affinity
>>>>>> mask the target CPU for the interrupt, which may result in some
>>>>>> CPUs handling too many interrupts.
>>>>> If what you want is for the *default* affinity to be spread
>>>>> around, that should be achieved pretty easily. Let me have a
>>>>> think about how to do that.
>>>> Cool, I anticipate that it should help my case.
>>>>
>>>> I can also seek out some NVMe cards to see how it would help a
>>>> more "generic" scenario.
>>> Can you give the following a go? It probably has all kinds of
>>> warts on top of the quality debug information, but I managed to
>>> get my D05 and a couple of guests to boot with it. It will
>>> probably eat your data, so use caution! ;-)
>>>
>> Hi Marc,
>> Ok, we'll give it a spin.
>> Thanks,
>> John
>
> Hi Marc,
>
> JFYI, we're still testing this and the patch itself seems to work as
> intended.
>
> Here's the kernel log if you just want to see how the interrupts are
> getting assigned:
> https://pastebin.com/hh3r810g
It is a bit hard to make sense of this dump, especially on such a wide
machine (I want one!) without really knowing the topology of the
system.
> For me, I did get a performance boost for NVMe testing, but my
> colleague Xiang Chen saw a drop for our storage test of interest -
> that's the HiSi SAS controller. We're trying to make sense of it now.
One of the differences is that with this patch, the initial affinity
is picked inside the NUMA node that matches the ITS. In your case,
that's either node 0 or 2. But it is unclear which CPUs these
map to.
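For what it's worth, the node-to-CPU mapping can be read straight out
of sysfs on the target machine, along these lines (the paths are the
standard sysfs NUMA layout; node numbering will obviously differ per
box):

```shell
# Print the CPU list owned by each NUMA node. Each node directory
# exposes a "cpulist" file with a human-readable CPU range.
for node in /sys/devices/system/node/node*; do
    echo "$(basename "$node"): $(cat "$node/cpulist")"
done
```

Comparing that output against the CPUs the interrupts landed on in the
pastebin dump should show whether the ITS-local nodes are the ones
doing all the work.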
Given that I see interrupts mapped to CPUs 0-23 on one side, and 48-71
on the other, it looks like half of your machine gets starved, and that
may be because no ITS targets the NUMA nodes they are part of. It would
be interesting to see what happens if you manually set the affinity
of the interrupts outside of the NUMA node.
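As a sketch of what "manually set the affinity" could look like from
userspace (the IRQ number 128 below is hypothetical; the real one has
to be read out of /proc/interrupts, and note that *managed* interrupts
normally reject userspace affinity writes with -EIO, so this only
works on non-managed IRQs or on a kernel patched to permit it):

```shell
# Hypothetical IRQ number; look up the real one in /proc/interrupts.
IRQ=128

# CPU-list form: steer the interrupt to CPUs 24-47, i.e. outside the
# node the ITS would otherwise pick on this machine.
echo 24-47 > /proc/irq/$IRQ/smp_affinity_list

# Equivalent hex bitmap form: a mask with bits 24..47 set.
printf '%x\n' $(( ((1 << 24) - 1) << 24 ))
echo ffffff000000 > /proc/irq/$IRQ/smp_affinity

# Check where the interrupt actually ended up:
cat /proc/irq/$IRQ/effective_affinity_list
```

If the performance delta moves with the affinity, that would support
the starvation theory above.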
Thanks,
M.
--
Jazz is not dead. It just smells funny...