Message-ID: <44023e6f-5a52-4681-84fc-dd623cd9f09d@huawei.com>
Date: Tue, 15 Oct 2024 19:41:26 +0800
From: Jijie Shao <shaojijie@...wei.com>
To: Paolo Abeni <pabeni@...hat.com>, <davem@...emloft.net>,
<edumazet@...gle.com>, <kuba@...nel.org>
CC: <shaojijie@...wei.com>, <shenjian15@...wei.com>,
<wangpeiyang1@...wei.com>, <liuyonglong@...wei.com>, <chenhao418@...wei.com>,
<sudongming1@...wei.com>, <xujunsheng@...wei.com>, <shiyongbang@...wei.com>,
<libaihan@...wei.com>, <andrew@...n.ch>, <jdamato@...tly.com>,
<horms@...nel.org>, <kalesh-anakkur.purayil@...adcom.com>,
<christophe.jaillet@...adoo.fr>, <jonathan.cameron@...wei.com>,
<shameerali.kolothum.thodi@...wei.com>, <salil.mehta@...wei.com>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V12 net-next 07/10] net: hibmcge: Implement rx_poll
function to receive packets

On 2024/10/15 18:28, Paolo Abeni wrote:
> On 10/10/24 16:21, Jijie Shao wrote:
>> @@ -124,6 +129,20 @@ static void hbg_buffer_free_skb(struct hbg_buffer *buffer)
>> buffer->skb = NULL;
>> }
>> +static int hbg_buffer_alloc_skb(struct hbg_buffer *buffer)
>> +{
>> + u32 len = hbg_spec_max_frame_len(buffer->priv, buffer->dir);
>> + struct hbg_priv *priv = buffer->priv;
>> +
>> + buffer->skb = netdev_alloc_skb(priv->netdev, len);
>> + if (unlikely(!buffer->skb))
>> + return -ENOMEM;
>
> It looks like I was not clear enough in my previous feedback:
> allocating the sk_buff struct at packet reception time will be much
> more efficient, because the sk_buff contents will be hot in cache for
> the RX path, while allocating it here, together with the data buffer
> itself, will almost guarantee 2-4 cache misses per RX packet.
>
> You could allocate only the data buffer here, i.e. via a page allocator,
> and at RX processing time use build_skb() on top of such a data buffer.
>
> I understand such a refactor would probably be painful at this
> point, but you should consider it as a follow-up.
Thank you for your advice.
We are actually focusing on optimizing performance now.
However, according to the test results, the current performance
bottleneck is not in the driver or the protocol stack.
This driver is a PCIe driver and the device is on the BMC side,
so all data transfers have to pass through PCIe DMA.
As a result, the maximum bandwidth cannot be reached.
We currently have a dedicated task to track and optimize performance.
Your suggestion is reasonable and we will adopt it when optimizing performance.
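For example, we understand the suggested direction to be roughly the
following (only a rough sketch; the data/truesize fields and the helper
names below are illustrative, not code from this patch set, and DMA
mapping is omitted):

/* at ring refill time: allocate only the data buffer, no sk_buff yet.
 * buffer->data and buffer->truesize are assumed new fields.
 */
static int hbg_buffer_alloc_data(struct hbg_buffer *buffer)
{
	u32 len = hbg_spec_max_frame_len(buffer->priv, buffer->dir);

	buffer->truesize = SKB_DATA_ALIGN(NET_SKB_PAD + len) +
			   SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	buffer->data = netdev_alloc_frag(buffer->truesize);
	if (unlikely(!buffer->data))
		return -ENOMEM;

	return 0;
}

/* at rx_poll time: wrap the already-filled data buffer with build_skb(),
 * so the sk_buff itself is allocated while its contents are hot in cache.
 */
static struct sk_buff *hbg_build_rx_skb(struct hbg_buffer *buffer,
					u32 pkt_len)
{
	struct sk_buff *skb;

	skb = build_skb(buffer->data, buffer->truesize);
	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, NET_SKB_PAD);
	__skb_put(skb, pkt_len);

	return skb;
}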
If possible, we would prefer not to modify this patch for the time being.
This patch set has already been revised many times, and we hope it can be
accepted as soon as possible if there are no other serious problems.
We have some other features waiting to be sent, and follow-up patches
to optimize performance will be sent in the future.
Thank you.
>
> Side note: the above always uses the maximum MTU for the packet size;
> if the device supports jumbo frames (8Kb size packets), it will
> produce quite a bad layout for the incoming packets... Is the device
> able to use multiple buffers for the incoming packets?
In fact, jumbo frames are not supported by the device, and the maximum MTU is 4Kb.
Thanks,
Jijie Shao