Message-ID: <44a4e759-02fc-4015-90a8-c41eb7cb3dc1@gmail.com>
Date: Mon, 27 Nov 2023 11:07:31 -0800
From: Florian Fainelli <f.fainelli@...il.com>
To: Souradeep Chakrabarti <schakrabarti@...rosoft.com>,
Jakub Kicinski <kuba@...nel.org>,
Souradeep Chakrabarti <schakrabarti@...ux.microsoft.com>
Cc: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"pabeni@...hat.com" <pabeni@...hat.com>,
Long Li <longli@...rosoft.com>,
"sharmaajay@...rosoft.com" <sharmaajay@...rosoft.com>,
"leon@...nel.org" <leon@...nel.org>,
"cai.huoqing@...ux.dev" <cai.huoqing@...ux.dev>,
"ssengar@...ux.microsoft.com" <ssengar@...ux.microsoft.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Paul Rosswurm <paulros@...rosoft.com>
Subject: Re: [EXTERNAL] Re: [PATCH V2 net-next] net: mana: Assigning IRQ
affinity on HT cores
On 11/27/23 01:36, Souradeep Chakrabarti wrote:
>
>
>> -----Original Message-----
>> From: Jakub Kicinski <kuba@...nel.org>
>> Sent: Wednesday, November 22, 2023 5:19 AM
>> To: Souradeep Chakrabarti <schakrabarti@...ux.microsoft.com>
>> Cc: KY Srinivasan <kys@...rosoft.com>; Haiyang Zhang
>> <haiyangz@...rosoft.com>; wei.liu@...nel.org; Dexuan Cui
>> <decui@...rosoft.com>; davem@...emloft.net; edumazet@...gle.com;
>> pabeni@...hat.com; Long Li <longli@...rosoft.com>;
>> sharmaajay@...rosoft.com; leon@...nel.org; cai.huoqing@...ux.dev;
>> ssengar@...ux.microsoft.com; vkuznets@...hat.com; tglx@...utronix.de; linux-
>> hyperv@...r.kernel.org; netdev@...r.kernel.org; linux-kernel@...r.kernel.org;
>> linux-rdma@...r.kernel.org; Souradeep Chakrabarti
>> <schakrabarti@...rosoft.com>; Paul Rosswurm <paulros@...rosoft.com>
>> Subject: [EXTERNAL] Re: [PATCH V2 net-next] net: mana: Assigning IRQ affinity on
>> HT cores
>>
>> On Tue, 21 Nov 2023 05:54:37 -0800 Souradeep Chakrabarti wrote:
>>> The existing MANA design assigns an IRQ to every CPU, including sibling
>>> hyper-threads in a core. This causes multiple IRQs to be serviced on the
>>> same CPU and may reduce network performance with RSS.
>>>
>>> Improve performance by adhering to the recommended configuration for RSS,
>>> which assigns IRQs on HT cores.
>>
>> Drivers should not have to carry 120 LoC for something as basic as spreading IRQs.
>> Please take a look at include/linux/topology.h and if there's nothing that fits your
>> needs there - add it. That way other drivers can reuse it.
> Because of the current design, it is easier to keep this logic inside
> the mana driver code. The idea of the IRQ distribution here is:
> 1) Loop through the interrupts to assign a CPU to each.
> 2) Find a non-sibling online CPU from the local NUMA node and assign
> the IRQ to it.
> 3) If the number of IRQs is more than the number of non-sibling CPUs in
> that NUMA node, then assign to sibling CPUs of that node.
> 4) Keep doing this until all the online CPUs are used or there are no
> more IRQs.
> 5) If all CPUs in that node are used, go to the next NUMA node with
> CPUs and keep doing 2 and 3.
> 6) If all CPUs in all NUMA nodes are used but there are still IRQs
> left, wrap around to the first local NUMA node and continue doing 2, 3
> and 4 until all IRQs are assigned.
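
For concreteness, here is a minimal sketch of the distribution order
described in the quoted list above, written against the generic
cpumask/topology helpers. The function name, the irqs[]/nvec parameters
and the choice of irq_set_affinity_and_hint() are illustrative
assumptions, not the actual mana driver code:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/interrupt.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

/* Illustrative only: spread nvec IRQs starting from the device's local
 * NUMA node, preferring one CPU per physical core before falling back
 * to hyper-thread siblings, then moving to the next node and finally
 * wrapping around once every online CPU has been used.
 */
static void sketch_spread_irqs(unsigned int *irqs, int nvec, int node)
{
	cpumask_var_t used, node_cpus;
	int i = 0, pass, cpu, n = node;

	if (!zalloc_cpumask_var(&used, GFP_KERNEL))
		return;
	if (!zalloc_cpumask_var(&node_cpus, GFP_KERNEL)) {
		free_cpumask_var(used);
		return;
	}

	while (i < nvec) {
		cpumask_and(node_cpus, cpumask_of_node(n), cpu_online_mask);

		/* Pass 0: skip CPUs whose HT sibling was already used.
		 * Pass 1: fall back to the remaining sibling CPUs.
		 */
		for (pass = 0; pass < 2 && i < nvec; pass++) {
			for_each_cpu(cpu, node_cpus) {
				if (cpumask_test_cpu(cpu, used))
					continue;
				if (pass == 0 &&
				    cpumask_intersects(topology_sibling_cpumask(cpu), used))
					continue;

				irq_set_affinity_and_hint(irqs[i], cpumask_of(cpu));
				cpumask_set_cpu(cpu, used);
				if (++i == nvec)
					break;
			}
		}

		/* Next NUMA node; once every online CPU has been used,
		 * start over so the remaining IRQs wrap around.
		 */
		n = next_node_in(n, node_online_map);
		if (cpumask_weight(used) == num_online_cpus())
			cpumask_clear(used);
	}

	free_cpumask_var(used);
	free_cpumask_var(node_cpus);
}
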
You are describing the logic of what the driver does, which does not
respond to Jakub's comment. His request is to consider coming up with
at least a reasonably usable, generic helper that other drivers can
reuse.
This also raises the obvious question: why is all of this in the kernel
in the first place? What could not be accomplished by an
initramfs/ramdisk with a minimal user space responsible for parsing the
system's NUMA node and CPU topology and assigning interrupts
accordingly?
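
The user-space side of that is small; a minimal sketch (the IRQ number
and CPU below are placeholders, and a real tool would first walk
/sys/devices/system/cpu/cpu*/topology/ to pick non-sibling CPUs) could
be as simple as:

#include <stdio.h>

/* Pin one IRQ to one CPU by writing /proc/irq/<irq>/smp_affinity_list. */
static int pin_irq_to_cpu(unsigned int irq, unsigned int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity_list", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%u\n", cpu);
	return fclose(f);
}

int main(void)
{
	/* Placeholder values: pin IRQ 42 to CPU 0. */
	return pin_irq_to_cpu(42, 0) ? 1 : 0;
}
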
We all like it when things "automagically" work, but this conflates
mechanism (supporting interrupt affinities) with policy (assigning
affinities based upon workload), and that never flies really well.
--
Florian