Message-ID:
<PH7PR21MB3116F5612AA8303512EEBA4CCA03A@PH7PR21MB3116.namprd21.prod.outlook.com>
Date: Tue, 25 Jul 2023 19:02:10 +0000
From: Haiyang Zhang <haiyangz@...rosoft.com>
To: Jesper Dangaard Brouer <jbrouer@...hat.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: "brouer@...hat.com" <brouer@...hat.com>, Dexuan Cui <decui@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>, Paul Rosswurm <paulros@...rosoft.com>,
"olaf@...fle.de" <olaf@...fle.de>, "vkuznets@...hat.com"
<vkuznets@...hat.com>, "davem@...emloft.net" <davem@...emloft.net>,
"wei.liu@...nel.org" <wei.liu@...nel.org>, "edumazet@...gle.com"
<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, "leon@...nel.org" <leon@...nel.org>,
Long Li <longli@...rosoft.com>, "ssengar@...ux.microsoft.com"
<ssengar@...ux.microsoft.com>, "linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>, "daniel@...earbox.net" <daniel@...earbox.net>,
"john.fastabend@...il.com" <john.fastabend@...il.com>, "bpf@...r.kernel.org"
<bpf@...r.kernel.org>, "ast@...nel.org" <ast@...nel.org>, Ajay Sharma
<sharmaajay@...rosoft.com>, "hawk@...nel.org" <hawk@...nel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>, "shradhagupta@...ux.microsoft.com"
<shradhagupta@...ux.microsoft.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, Ilias Apalodimas
<ilias.apalodimas@...aro.org>
Subject: RE: [PATCH V3,net-next] net: mana: Add page pool for RX buffers

> -----Original Message-----
> From: Jesper Dangaard Brouer <jbrouer@...hat.com>
> Sent: Tuesday, July 25, 2023 2:01 PM
>
> >>
> >> Our driver is using NUMA 0 by default, so I implicitly assign NUMA node id
> >> to zero during pool init.
> >>
> >> And, if the IRQ/CPU affinity is changed, the page_pool_nid_changed()
> >> will update the nid for the pool. Does this sound good?
> >>
> >
> > Also, since our driver is getting the default node from here:
> > gc->numa_node = dev_to_node(&pdev->dev);
> > I will update this patch to set the default node as above, instead of implicitly
> > assigning it to 0.
> >
>
> In that case, I agree that it makes sense to use dev_to_node(&pdev->dev),
> like:
> pprm.nid = dev_to_node(&pdev->dev);
>
> The driver must have a reason for assigning gc->numa_node for this hardware,
> which is okay. That is why the page_pool API allows the driver to control this.
>
> But then I don't think you should call page_pool_nid_changed() like
>
> page_pool_nid_changed(rxq->page_pool, numa_mem_id());
>
> Because then you will (at the first packet-processing event) revert the
> dev_to_node() setting to use the numa_mem_id() of the processing/running CPU.
> (In effect this will be the same as setting NUMA_NO_NODE).
>
> I know mlx5 does call page_pool_nid_changed(), but they showed benchmark
> numbers that this was the preferred action, even when the sysadm had
> "misconfigured" the default smp_affinity so that RX processing happens on a
> remote NUMA node. AFAIK mlx5 keeps the descriptor rings on the
> originally configured NUMA node that corresponds to the NIC PCIe slot.

In mana_gd_setup_irqs(), we set the default IRQ/CPU affinity to gc->numa_node
too, so the page_pool_nid_changed() call won't revert the initial nid setting.
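
For clarity, the pool init in the next version will look roughly like this
(just a sketch, not the final code; the helper and field names other than
gc->numa_node and pprm.nid may differ in the actual patch):

    #include <net/page_pool.h>

    /* Sketch: create the RX page pool on the device's NUMA node, which
     * matches the default IRQ/CPU affinity set in mana_gd_setup_irqs().
     */
    static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
    {
        struct page_pool_params pprm = {};
        int err;

        pprm.pool_size = RX_BUFFERS_PER_QUEUE;  /* placeholder ring size */
        pprm.nid = gc->numa_node;   /* gc->numa_node = dev_to_node(&pdev->dev) */
        pprm.dev = gc->dev;

        rxq->page_pool = page_pool_create(&pprm);
        if (IS_ERR(rxq->page_pool)) {
            err = PTR_ERR(rxq->page_pool);
            rxq->page_pool = NULL;
            return err;
        }

        return 0;
    }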

Currently, the Azure hypervisor always indicates NUMA node 0 as the default.
(In the future, it will start to provide the accurate default device node.)
When a user manually changes the IRQ/CPU affinity for performance tuning, we
want to allow page_pool_nid_changed() to update the pool's nid. Is this OK?
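
And the nid update stays in the RX buffer-refill path, roughly like below
(again only a sketch, with the real function's arguments simplified):

    /* Sketch: per-buffer refill on the RX/NAPI path.
     * page_pool_nid_changed() is a no-op while the pool's nid already
     * equals numa_mem_id() of the CPU doing RX; it only updates the pool
     * after the user moves the IRQ/CPU affinity to a different node.
     */
    static struct page *mana_get_rxfrag(struct mana_rxq *rxq)
    {
        struct page *page;

        page_pool_nid_changed(rxq->page_pool, numa_mem_id());

        page = page_pool_dev_alloc_pages(rxq->page_pool);
        if (!page)
            return NULL;

        return page;
    }
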
Thanks,
- Haiyang