Message-ID: <PH7PR21MB3116C4C749C4E915CADEE273CA00A@PH7PR21MB3116.namprd21.prod.outlook.com>
Date: Wed, 26 Jul 2023 15:51:32 +0000
From: Haiyang Zhang <haiyangz@...rosoft.com>
To: Jesper Dangaard Brouer <jbrouer@...hat.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: "brouer@...hat.com" <brouer@...hat.com>,
Dexuan Cui <decui@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>,
Paul Rosswurm <paulros@...rosoft.com>,
"olaf@...fle.de" <olaf@...fle.de>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"leon@...nel.org" <leon@...nel.org>,
Long Li <longli@...rosoft.com>,
"ssengar@...ux.microsoft.com" <ssengar@...ux.microsoft.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"john.fastabend@...il.com" <john.fastabend@...il.com>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"ast@...nel.org" <ast@...nel.org>,
Ajay Sharma <sharmaajay@...rosoft.com>,
"hawk@...nel.org" <hawk@...nel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"shradhagupta@...ux.microsoft.com" <shradhagupta@...ux.microsoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>
Subject: RE: [PATCH V3,net-next] net: mana: Add page pool for RX buffers
> -----Original Message-----
> From: Jesper Dangaard Brouer <jbrouer@...hat.com>
> Sent: Wednesday, July 26, 2023 5:23 AM
> >
> > In mana_gd_setup_irqs(), we set the default IRQ/CPU affinity to
> > gc->numa_node too, so it won't revert the initial nid setting.
> >
> > Currently, the Azure hypervisor always indicates NUMA node 0 as the
> > default. (In the future, it will start providing the accurate default
> > device node.) When a user manually changes the IRQ/CPU affinity for
> > perf tuning, we want to allow page_pool_nid_changed() to update the
> > pool. Is this OK?
> >
>
> If I were you, I would wait with the page_pool_nid_changed()
> "optimization" and do a benchmark to see if it actually has a
> benefit. (You can do this in another patch.) (In an Azure hypervisor
> environment it might not be the right choice.)
Ok, I will submit a patch without the page_pool_nid_changed() optimization
for now, and will do more testing on this.
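For reference, a minimal sketch of where the nid check would sit in the
driver's NAPI poll path; the queue layout, the names, and the stubbed-out
RX processing are assumptions for illustration, not the actual patch code:

#include <linux/kernel.h>	/* container_of() */
#include <linux/netdevice.h>	/* napi_struct, napi_complete_done() */
#include <linux/topology.h>	/* numa_mem_id() */
#include <net/page_pool.h>	/* page_pool_nid_changed() */

/* Hypothetical RX queue layout, for illustration only. */
struct mana_rxq_sketch {
	struct napi_struct napi;
	struct page_pool *page_pool;
};

static int mana_poll_sketch(struct napi_struct *napi, int budget)
{
	struct mana_rxq_sketch *rxq =
		container_of(napi, struct mana_rxq_sketch, napi);
	int work_done = 0;

	/*
	 * If the user re-pinned the IRQ so this queue now runs on a
	 * different NUMA node, tell the pool to allocate from the new
	 * local node; this is a no-op when the nid is unchanged.
	 */
	page_pool_nid_changed(rxq->page_pool, numa_mem_id());

	/* RX CQE processing would go here, updating work_done. */

	if (work_done < budget)
		napi_complete_done(napi, work_done);
	return work_done;
}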
> This reminds me, do you have any benchmark data on the improvement this
> patch (using page_pool) gave?
In an iperf test with 128 threads, this patch improved throughput by
12-15% and decreased the IRQ-associated CPU's usage from 99-100% to
10-50%.
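For context, a hedged sketch of the general shape of the RX pool setup
with a NUMA hint; the sizes and field values here are illustrative
assumptions, not the parameters the actual patch uses:

#include <linux/device.h>
#include <linux/dma-direction.h>
#include <net/page_pool.h>

/* Illustrative pool creation with a NUMA hint; a sketch only. */
static struct page_pool *mana_create_rx_pool_sketch(struct device *dev,
						    int nid)
{
	struct page_pool_params pprm = {
		.order = 0,		/* one page per RX buffer */
		.pool_size = 256,	/* assumed ring-sized cache */
		.nid = nid,		/* e.g. gc->numa_node; NUMA_NO_NODE
					 * defers the choice so a later
					 * page_pool_nid_changed() takes
					 * effect */
		.dev = dev,
		.dma_dir = DMA_FROM_DEVICE,
	};

	return page_pool_create(&pprm);	/* returns ERR_PTR() on failure */
}

With .nid = NUMA_NO_NODE the pool falls back to the local node of the
allocating CPU, which is why it pairs naturally with the nid-changed
check sketched above.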
Thanks,
- Haiyang