Message-ID: <20240313200545.656f6dd6@hermes.local>
Date: Wed, 13 Mar 2024 20:05:45 -0700
From: Stephen Hemminger <stephen@...workplumber.org>
To: Shradha Gupta <shradhagupta@...ux.microsoft.com>
Cc: Jakub Kicinski <kuba@...nel.org>, Haiyang Zhang
<haiyangz@...rosoft.com>, Shradha Gupta <shradhagupta@...rosoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, Eric Dumazet
<edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Ajay Sharma
<sharmaajay@...rosoft.com>, Leon Romanovsky <leon@...nel.org>, Thomas
Gleixner <tglx@...utronix.de>, Sebastian Andrzej Siewior
<bigeasy@...utronix.de>, KY Srinivasan <kys@...rosoft.com>, Wei Liu
<wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>, Long Li
<longli@...rosoft.com>, Michael Kelley <mikelley@...rosoft.com>
Subject: Re: [PATCH] net: mana: Add per-cpu stats for MANA device
On Wed, 13 Mar 2024 19:57:20 -0700
Shradha Gupta <shradhagupta@...ux.microsoft.com> wrote:
> Following is the data we can share:
>
> Default interrupt affinity for each queue:
>
> 25: 1 103 0 2989138 Hyper-V PCIe MSI 4138200989697-edge mana_q0@pci:7870:00:00.0
> 26: 0 1 4005360 0 Hyper-V PCIe MSI 4138200989698-edge mana_q1@pci:7870:00:00.0
> 27: 0 0 1 2997584 Hyper-V PCIe MSI 4138200989699-edge mana_q2@pci:7870:00:00.0
> 28: 3565461 0 0 1 Hyper-V PCIe MSI 4138200989700-edge mana_q3@pci:7870:00:00.0
>
> As seen, the CPU-to-queue mapping is not 1:1: queue 0 and queue 2 are both
> mapped to CPU 3. From this knowledge we can compute the total RX stats
> processed by each CPU by adding the mana_q0 and mana_q2 counts for CPU 3.
> But if this mapping changes dynamically, via irqbalance or edits to the
> smp_affinity files, the above assumption breaks.
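
The aggregation described above can be sketched as follows. This is a
hypothetical helper (not part of the patch) that sums the per-CPU columns of
all mana_q* rows from /proc/interrupts-style input; the sample lines below
are shortened from the table quoted earlier:

```python
from collections import defaultdict

def percpu_totals(lines, nr_cpus):
    """Sum the per-CPU interrupt counts across all mana_q* rows."""
    totals = defaultdict(int)
    for line in lines:
        if "mana_q" not in line:
            continue
        fields = line.split()
        # fields[0] is the IRQ number ("25:"), the next nr_cpus fields
        # are the per-CPU interrupt counts for that queue's IRQ.
        for cpu, count in enumerate(fields[1:1 + nr_cpus]):
            totals[cpu] += int(count)
    return dict(totals)

sample = [
    "25: 1 103 0 2989138 Hyper-V PCIe MSI edge mana_q0@pci:7870:00:00.0",
    "27: 0 0 1 2997584 Hyper-V PCIe MSI edge mana_q2@pci:7870:00:00.0",
]
# CPU 3's total combines the mana_q0 and mana_q2 counts.
print(percpu_totals(sample, 4))
```

Note this only reflects the affinity in effect when the counts were read;
once IRQs migrate, the static queue-to-CPU assumption no longer holds, which
is the limitation being pointed out.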
irqbalance is often a bad idea.
In the past, doing one-shot balancing at startup was a better plan.