Message-ID: <6396223c-6008-0e1b-e6ed-79c04c87a5e0@redhat.com>
Date: Wed, 26 Jul 2023 11:22:40 +0200
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Haiyang Zhang <haiyangz@...rosoft.com>,
Jesper Dangaard Brouer <jbrouer@...hat.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Cc: brouer@...hat.com, Dexuan Cui <decui@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>, Paul Rosswurm <paulros@...rosoft.com>,
"olaf@...fle.de" <olaf@...fle.de>, "vkuznets@...hat.com"
<vkuznets@...hat.com>, "davem@...emloft.net" <davem@...emloft.net>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>, "pabeni@...hat.com"
<pabeni@...hat.com>, "leon@...nel.org" <leon@...nel.org>,
Long Li <longli@...rosoft.com>,
"ssengar@...ux.microsoft.com" <ssengar@...ux.microsoft.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"john.fastabend@...il.com" <john.fastabend@...il.com>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>, "ast@...nel.org"
<ast@...nel.org>, Ajay Sharma <sharmaajay@...rosoft.com>,
"hawk@...nel.org" <hawk@...nel.org>, "tglx@...utronix.de"
<tglx@...utronix.de>,
"shradhagupta@...ux.microsoft.com" <shradhagupta@...ux.microsoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH V3,net-next] net: mana: Add page pool for RX buffers
On 25/07/2023 21.02, Haiyang Zhang wrote:
>
>> -----Original Message-----
>> From: Jesper Dangaard Brouer <jbrouer@...hat.com>
>> Sent: Tuesday, July 25, 2023 2:01 PM
>>>>
>>>> Our driver is using NUMA 0 by default, so I implicitly assign NUMA node id
>>>> to zero during pool init.
>>>>
>>>> And, if the IRQ/CPU affinity is changed, the page_pool_nid_changed()
>>>> will update the nid for the pool. Does this sound good?
>>>>
>>>
>>> Also, since our driver is getting the default node from here:
>>> gc->numa_node = dev_to_node(&pdev->dev);
>>> I will update this patch to set the default node as above, instead of implicitly
>>> assigning it to 0.
>>>
>>
>> In that case, I agree that it makes sense to use dev_to_node(&pdev->dev),
>> like:
>> pprm.nid = dev_to_node(&pdev->dev);
>>
>> The driver must have a reason for assigning gc->numa_node for this hardware,
>> which is okay. That is why the page_pool API allows the driver to control this.
>>
>> But then I don't think you should call page_pool_nid_changed() like this:
>>
>> page_pool_nid_changed(rxq->page_pool, numa_mem_id());
>>
>> Because then, at the first packet-processing event, you will revert the
>> dev_to_node() setting to the numa_mem_id() of the processing/running CPU.
>> (In effect this is the same as setting NUMA_NO_NODE.)
>>
>> I know mlx5 does call page_pool_nid_changed(), but they showed benchmark
>> numbers that this was the preferred action, even when the sysadmin had
>> "misconfigured" the default smp_affinity so RX processing happens on a
>> remote NUMA node. AFAIK mlx5 keeps the descriptor rings on the
>> originally configured NUMA node that corresponds to the NIC's PCIe slot.
>
> In mana_gd_setup_irqs(), we set the default IRQ/CPU affinity to gc->numa_node
> too, so it won't revert the initial nid setting.
>
> Currently, the Azure hypervisor always indicates NUMA 0 as the default. (In
> the future, it will start providing the accurate default device node.) When a
> user manually changes the IRQ/CPU affinity for perf tuning, we want to
> allow page_pool_nid_changed() to update the pool. Is this OK?
>
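Just to be sure we are talking about the same thing: I assume the call
would sit in the RX/NAPI poll path, roughly like this (untested sketch;
the example_* names are made up and not the actual mana code, and it
needs <net/page_pool.h> plus <linux/netdevice.h>):

struct example_rxq {
	struct napi_struct napi;
	struct page_pool *page_pool;
	/* ... */
};

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	struct example_rxq *rxq = container_of(napi, struct example_rxq, napi);

	/* re-point the pool at whatever node RX currently runs on,
	 * e.g. after the admin re-pins the IRQ for tuning
	 */
	page_pool_nid_changed(rxq->page_pool, numa_mem_id());

	/* example_process_rx() stands in for the driver's real RX loop */
	return example_process_rx(rxq, budget);
}
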
If I were you, I would hold off on the page_pool_nid_changed()
"optimization" and run a benchmark to see whether it actually has a
benefit. (You can do this in another patch; in an Azure hypervisor
environment it might not be the right choice.)
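
For reference, the setup I have in mind is roughly this (again an
untested sketch, reusing the example_rxq above; only the pprm.nid line
is what I am actually suggesting, the rest is placeholder):

static int example_create_page_pool(struct example_rxq *rxq,
				    struct pci_dev *pdev)
{
	struct page_pool_params pprm = {};

	pprm.pool_size = RX_RING_SIZE;	/* placeholder ring size */
	/* keep the pool on the node the device hangs off, not node 0 */
	pprm.nid = dev_to_node(&pdev->dev);

	rxq->page_pool = page_pool_create(&pprm);
	if (IS_ERR(rxq->page_pool))
		return PTR_ERR(rxq->page_pool);

	/* no page_pool_nid_changed(..., numa_mem_id()) in the RX path
	 * for now; benchmark that separately in a follow-up patch
	 */
	return 0;
}
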
This reminds me, do you have any benchmark data on the improvement this
patch (using page_pool) gave?
--Jesper