Message-ID: <a22aaa72-4f5f-40d4-33e0-0aff8b65fdc2@huawei.com>
Date: Fri, 15 May 2020 12:50:06 +0100
From: John Garry <john.garry@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Jason Cooper <jason@...edaemon.net>,
"chenxiang (M)" <chenxiang66@...ilicon.com>,
Robin Murphy <robin.murphy@....com>,
luojiaxing <luojiaxing@...wei.com>,
Ming Lei <ming.lei@...hat.com>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH v3 0/2] irqchip/gic-v3-its: Balance LPI affinity across
CPUs
Hi Marc,
> Absolutely. Life has got in the way, so let me page it back in...
Great
>>
>> [PATCH 2/2] irqchip/gic-v3-its: Handle no overlap of non-managed irq
>> affinity mask
>>
>> In selecting the target CPU for a non-managed interrupt, we may select
>> a
>> target CPU outside the requested affinity mask.
>>
>> This is because there may be no overlap of the ITS node mask and the
>> requested CPU affinity mask. The requested affinity mask may be coming
>> from userspace or some drivers which try to set irq affinity, see [0].
>>
>> In this case, just ignore the ITS node cpumask. This is a deviation
>> from
>> what Thomas described. Having said that, I am not sure if the
>> interrupt is ever bound to a node for us.
>>
>> [0]
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/perf/hisilicon/hisi_uncore_pmu.c#n417
>>
>> ---
>> drivers/irqchip/irq-gic-v3-its.c | 4 ----
>> 1 file changed, 4 deletions(-)
>>
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c
>> b/drivers/irqchip/irq-gic-v3-its.c
>> index 2b18feb..12d5d4b4 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -1584,10 +1584,6 @@ static int its_select_cpu(struct irq_data *d,
>> cpumask_and(tmpmask, cpumask_of_node(node), aff_mask);
>> cpumask_and(tmpmask, tmpmask, cpu_online_mask);
>>
>> - /* If that doesn't work, try the nodemask itself */
>> - if (cpumask_empty(tmpmask))
>> - cpumask_and(tmpmask, cpumask_of_node(node), cpu_online_mask);
>> -
>> cpu = cpumask_pick_least_loaded(d, tmpmask);
>> if (cpu < nr_cpu_ids)
>> goto out;
>
> I'm really not sure. Shouldn't we then drop the wider search on
> cpu_online_mask, because userspace could have given us something
> that we cannot deal with?
It's not just userspace. Some drivers also call irq_set_affinity{_hint}()
with a non-overlapping affinity mask.
We could simply reject these requests with an error, but some drivers rely
on the current behavior. Consider the uncore driver I mentioned above, which
WARNs when setting the affinity fails: it tries to set the affinity to the
cpumask of the cluster associated with the device, but with D06's ITS
config, below, there may be no overlap.
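For reference, the uncore driver does something roughly like this when a CPU
from the associated cluster comes online (paraphrased from [0], so the
details are approximate):

	/*
	 * Route the PMU overflow interrupt to a CPU in the cluster
	 * associated with the device, and warn if that cannot be done.
	 */
	WARN_ON(irq_set_affinity_hint(hisi_pmu->irq, cpumask_of(cpu)));

where cpu is a CPU within that cluster, so the requested mask may lie
entirely outside the ITS node mask.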
>
> What you are advocating for is a strict adherence to the provided
> mask, and it doesn't seem to be what other architectures are providing.
> I consider the userspace-provided affinity as a hint more that anything
> else, as in this case the kernel does know better (routing the interrupt
> to a foreign node might be costly, or even impossible, see the TX1
> erratum).
Right
>
> From what I remember of the earlier discussion, you saw an issue on
> a system with two sockets and a single ITS, with the node mask limited
> to the first socket. Is that correct?
A bit more complicated: 2 sockets, 2 NUMA nodes per socket, and ITS
config as follows:
D06ES: 1x ITS, with proximity to node #0:

root@(none)$ dmesg | grep ITS
[ 0.000000] SRAT: PXM 0 -> ITS 0 -> Node 0

D06CS: 2x ITS, with proximity to nodes #0 and #2:

estuary:/$ dmesg | grep ITS
[ 0.000000] SRAT: PXM 0 -> ITS 0 -> Node 0
[ 0.000000] SRAT: PXM 2 -> ITS 1 -> Node 2
It complicates things.
We could add extra intelligence to record whether a node has an ITS
associated with it. If it does not, we would fall back on the requested
affinity only (for the case of no overlap). It gets a bit messier then.
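Something like the below, completely untested and just to illustrate the
idea; its_node_map and aff_mask_has_its_node() are made up here, and
its_node_map would need to be filled in at probe time (e.g.
node_set(its->numa_node, its_node_map) in its_probe_one()):

	/* Hypothetical: the set of nodes which have an ITS */
	static nodemask_t its_node_map;

	static bool aff_mask_has_its_node(const struct cpumask *aff_mask)
	{
		int cpu;

		/* Does any CPU in the requested mask sit on a node with an ITS? */
		for_each_cpu(cpu, aff_mask)
			if (node_isset(cpu_to_node(cpu), its_node_map))
				return true;

		return false;
	}

and then in its_select_cpu(), after the node/affinity intersection:

	/*
	 * No overlap with the ITS node, and none of the requested CPUs
	 * have a local ITS anyway: honour the requested affinity rather
	 * than widening it to the ITS node.
	 */
	if (cpumask_empty(tmpmask) && !aff_mask_has_its_node(aff_mask))
		cpumask_and(tmpmask, aff_mask, cpu_online_mask);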
Thanks,
John