Message-ID: <ef5266a0-6d7a-4327-be7c-11f46f8d1074@huawei.com>
Date: Mon, 30 Dec 2024 17:18:39 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: John Daley <johndale@...co.com>, <benve@...co.com>, <satishkh@...co.com>,
<andrew+netdev@...n.ch>, <davem@...emloft.net>, <edumazet@...gle.com>,
<kuba@...nel.org>, <pabeni@...hat.com>, <netdev@...r.kernel.org>
CC: Nelson Escobar <neescoba@...co.com>
Subject: Re: [PATCH net-next v3 4/6] enic: Use the Page Pool API for RX when
MTU is less than page size
On 2024/12/28 8:10, John Daley wrote:
> +void enic_rq_free_page(struct vnic_rq *vrq, struct vnic_rq_buf *buf)
> +{
> +	struct enic *enic = vnic_dev_priv(vrq->vdev);
> +	struct enic_rq *rq = &enic->rq[vrq->index];
> +
> +	if (!buf->os_buf)
> +		return;
> +
> +	page_pool_put_page(rq->pool, (struct page *)buf->os_buf,
> +			   get_max_pkt_len(enic), true);
It seems the above has a similar problem of not using
page_pool_put_full_page() when the page_pool_dev_alloc() API is used and
the page_pool is created with the PP_FLAG_DMA_SYNC_DEV flag.

This seems to be a common enough mistake that a WARN_ON might be needed
to catch this kind of problem:

https://lore.kernel.org/netdev/89d7ce83-cc1d-4791-87b5-6f7af29a031d@huawei.com/
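
If that suggestion is adopted, the release path might look like the
sketch below (untested, assuming the same struct layout as in the
patch). page_pool_put_full_page() takes no length argument, so the
pool can sync the whole page for device when PP_FLAG_DMA_SYNC_DEV is
set, rather than trusting a caller-supplied length that may not cover
the fragment's offset within the page:

```c
void enic_rq_free_page(struct vnic_rq *vrq, struct vnic_rq_buf *buf)
{
	struct enic *enic = vnic_dev_priv(vrq->vdev);
	struct enic_rq *rq = &enic->rq[vrq->index];

	if (!buf->os_buf)
		return;

	/* With PP_FLAG_DMA_SYNC_DEV, let the page_pool decide how much
	 * of the page to sync; a partial length plus a non-zero offset
	 * from page_pool_dev_alloc() can otherwise leave stale cachelines.
	 */
	page_pool_put_full_page(rq->pool, (struct page *)buf->os_buf, true);
	buf->os_buf = NULL;
}
```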
> +	buf->os_buf = NULL;
> +}
> +
> +int enic_rq_alloc_page(struct vnic_rq *vrq)
> +{
> +	struct enic *enic = vnic_dev_priv(vrq->vdev);
> +	struct enic_rq *rq = &enic->rq[vrq->index];
> +	struct enic_rq_stats *rqstats = &rq->stats;
> +	struct vnic_rq_buf *buf = vrq->to_use;
> +	dma_addr_t dma_addr;
> +	struct page *page;
> +	unsigned int offset = 0;
> +	unsigned int len;
> +	unsigned int truesize;
> +
> +	len = get_max_pkt_len(enic);
> +	truesize = len;
> +
> +	if (buf->os_buf) {
> +		dma_addr = buf->dma_addr;
> +	} else {
> +		page = page_pool_dev_alloc(rq->pool, &offset, &truesize);
> +		if (unlikely(!page)) {
> +			rqstats->pp_alloc_error++;
> +			return -ENOMEM;
> +		}
> +		buf->os_buf = (void *)page;
> +		buf->offset = offset;
> +		buf->truesize = truesize;
> +		dma_addr = page_pool_get_dma_addr(page) + offset;
> +	}
> +
> +	enic_queue_rq_desc(vrq, buf->os_buf, dma_addr, len);
> +
> +	return 0;
> +}