Message-ID: <20250923072606.1178-1-gongfan1@huawei.com>
Date: Tue, 23 Sep 2025 15:26:05 +0800
From: Fan Gong <gongfan1@...wei.com>
To: <dan.carpenter@...aro.org>
CC: <andrew+netdev@...n.ch>, <christophe.jaillet@...adoo.fr>,
<corbet@....net>, <davem@...emloft.net>, <edumazet@...gle.com>,
<gongfan1@...wei.com>, <guoxin09@...wei.com>, <gur.stavi@...wei.com>,
<helgaas@...nel.org>, <horms@...nel.org>, <jdamato@...tly.com>,
<kuba@...nel.org>, <lee@...ger.us>, <linux-doc@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <luosifu@...wei.com>,
<luoyang82@...artners.com>, <meny.yossefi@...wei.com>, <mpe@...erman.id.au>,
<netdev@...r.kernel.org>, <pabeni@...hat.com>,
<przemyslaw.kitszel@...el.com>, <shenchenyang1@...ilicon.com>,
<shijing34@...wei.com>, <sumang@...vell.com>, <vadim.fedorenko@...ux.dev>,
<wulike1@...wei.com>, <zhoushuai28@...wei.com>, <zhuyikai1@...artners.com>
Subject: Re: [PATCH net-next v06 08/14] hinic3: Queue pair resource initialization
On 9/18/2025 3:38 PM, Dan Carpenter wrote:
> On Fri, Sep 12, 2025 at 02:28:25PM +0800, Fan Gong wrote:
>> @@ -102,6 +127,41 @@ static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
>> return i;
>> }
>>
>> +static u32 hinic3_alloc_rx_buffers(struct hinic3_dyna_rxq_res *rqres,
>> + u32 rq_depth, u16 buf_len)
>> +{
>> + u32 free_wqebbs = rq_depth - 1;
>
> Why is there this "- 1" here. Why do we not allocate the last page so
> it's 1 page for each rq_depth?
>
> regards,
> dan carpenter
>
Thanks for your comment, and sorry for the late reply.

This comes from the queue design. PI (the producer index) points to the
next slot the driver can fill; when PI catches up to CI (the consumer
index) in HW, the HW considers the queue full.
hinic3_alloc_rx_buffers() replenishes RX buffers, after which the driver
tells HW how many new idle WQEs are available, so it must never report a
state that looks full. The driver could allocate one buffer per slot of
the queue depth (e.g. with depth 1024 and CI == 0, posting 1024 buffers
wraps PI back to 0), but then PI would equal CI again and the queue state
would be ambiguous. Allocating one buffer fewer than the depth avoids
ever triggering that "full" condition.