Message-ID: <a5132a5f-efe8-4305-07dd-d120b51b1360@huawei.com>
Date: Wed, 18 Mar 2020 19:00:27 +0000
From: John Garry <john.garry@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: Jason Cooper <jason@...edaemon.net>,
luojiaxing <luojiaxing@...wei.com>,
<linux-kernel@...r.kernel.org>, Ming Lei <ming.lei@...hat.com>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>,
Thomas Gleixner <tglx@...utronix.de>,
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v3 2/2] irqchip/gic-v3-its: Balance initial LPI affinity
across CPUs
Hi Marc,
>> And for some reason fancied cpu62.
>
> Hmmm. OK. I'm surprised that irqbalance tries to set a range of CPUs,
> instead of
> a particular CPU, though.
It does seem strange. But also quite consistent. I will check again on that.
>>>
>>> But it has the mask for CPUs that are best suited for this interrupt,
>>> right?
>>> If I understand the topology of your machine, it has an ITS per 64 CPUs,
>>> and
>>> this device is connected to the ITS that serves the second socket.
>>
>> No, this one (D06ES) has a single ITS:
>>
>> john@...ntu:~/kernel-dev$ dmesg | grep ITS
>> [ 0.000000] SRAT: PXM 0 -> ITS 0 -> Node 0
>> [ 0.000000] ITS [mem 0x202100000-0x20211ffff]
>> [ 0.000000] ITS@...000000202100000: Using ITS number 0
>> [ 0.000000] ITS@...000000202100000: allocated 8192 Devices
>> @23ea9f0000 (indirect, esz 8, psz 16K, shr 1)
>> [ 0.000000] ITS@...000000202100000: allocated 2048 Virtual CPUs
>> @23ea9d8000 (indirect, esz 16, psz 4K, shr 1)
>> [ 0.000000] ITS@...000000202100000: allocated 256 Interrupt
>> Collections @23ea9d3000 (flat, esz 16, psz 4K, shr 1)
>> [ 0.000000] ITS: Using DirectLPI for VPE invalidation
>> [ 0.000000] ITS: Enabling GICv4 support
>> [ 0.044034] Platform MSI: ITS@...02100000 domain created
>> [ 0.044042] PCI/MSI: ITS@...02100000 domain created
>
> There's something I'm missing here. If there's a single ITS in the system,
> node affinity must cover the whole system, not half of it.
>
>> D06CS has 2x ITS, as you may know :)
>>
>> And, FWIW, the device is on the 2nd socket, numa node #2.
>
> You've lost me. Single ITS, but two sockets?
Yeah, right, so I think that a single ITS is used due to some HW bug in
the ES chip, fixed in the CS chip.
And some more background on the D05, D06ES, D06CS topology:
Even though the system is 2x socket, we model it as 4x NUMA nodes, i.e. 2x
nodes per socket. This is because each node has its own memory controller
in the socket, i.e. 2x memory controllers per socket. As such, for this
D06ES system, a NUMA node is 24 cores.
I will be the first to admit that it does make things more complicated.
It gets especially awkward (and is arguably broken) when we need to assign
a proximity domain to devices in either socket, given that they are
equidistant from both memory controller/CPU clusters in that socket.
>
>>
>> So the cpu mask of node #0 (where the ITS lives) is 0-23. So no
>> intersection with what userspace requested.
>>
>>>> 	if (cpu < 0 || cpu >= nr_cpu_ids)
>>>> 		return -EINVAL;
>>>>
>>>> 	if (cpu != its_dev->event_map.col_map[id]) {
>>>> 		its_inc_lpi_count(d, cpu);
>>>> 		its_dec_lpi_count(d, its_dev->event_map.col_map[id]);
>>>> 		target_col = &its_dev->its->collections[cpu];
>>>> 		its_send_movi(its_dev, target_col, id);
>>>> 		its_dev->event_map.col_map[id] = cpu;
>>>> 		irq_data_update_effective_affinity(d, cpumask_of(cpu));
>>>> 	}
>>>>
>>>> So cpu may not be a member of mask_val. Hence the inconsistency between
>>>> the affinity list and the effective affinity. We could just drop the
>>>> AND of the ITS node mask in its_select_cpu().
>>>
>>> That would be a departure from the algorithm Thomas proposed, which made
>>> a lot of sense in my opinion. What its_select_cpu() does in this case is
>>> probably the best that can be achieved from a latency perspective,
>>> as it keeps the interrupt local to the socket that generated it.
>>
>> We seem to be following what Thomas described for a non-managed
>> interrupt bound to a node. But is this interrupt bound to the node?
>
> If the ITS advertizes affinity to a node (through SRAT, for example),
> we should use that. And that's what we have in this patch.
Right, but my system is incompatible with that. The reason is that SRAT
says the ITS is in NUMA node #0 (I think choosing node #0 over #1 may just
be arbitrary), and the cpu mask for NUMA node #0 is 0-23, as above. And I
figure that even D06CS, with 2x ITS, is incompatible for the same reason.
So your expectation for a single ITS system would be that the NUMA node
cpu mask for the ITS would cover all cpus. Sadly, it doesn't here...
Much appreciated,
John