Message-ID: <992eb402-43f9-0289-f95c-b41cb17f2e59@huawei.com>
Date: Mon, 29 May 2023 09:33:46 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Liang Chen <liangchen.linux@...il.com>, <jasowang@...hat.com>,
<mst@...hat.com>
CC: <virtualization@...ts.linux-foundation.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <xuanzhuo@...ux.alibaba.com>,
<kuba@...nel.org>, <edumazet@...gle.com>, <davem@...emloft.net>,
<pabeni@...hat.com>, <alexander.duyck@...il.com>
Subject: Re: [PATCH net-next 3/5] virtio_net: Add page pool fragmentation
support
On 2023/5/26 13:46, Liang Chen wrote:
...
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 99c0ca0c1781..ac40b8c66c59 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -32,7 +32,9 @@ module_param(gso, bool, 0444);
> module_param(napi_tx, bool, 0644);
>
> static bool page_pool_enabled;
> +static bool page_pool_frag;
> module_param(page_pool_enabled, bool, 0400);
> +module_param(page_pool_frag, bool, 0400);
The below patchset unifies the frag and non-frag page handling behind the
page_pool_alloc_frag() API; perhaps it would simplify the driver's page
pool support:
https://patchwork.kernel.org/project/netdevbpf/cover/20230526092616.40355-1-linyunsheng@huawei.com/
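For example (untested sketch, and assuming that series ends up letting the
driver call the frag API whether or not PP_FLAG_PAGE_FRAG is set), the
frag/non-frag branches in add_recvbuf_mergeable() below could collapse
into something like:

	struct page *page;
	unsigned int pp_frag_offset;

	/* one call covers both the frag and the full-page case */
	page = page_pool_dev_alloc_frag(rq->page_pool, &pp_frag_offset,
					len + room);
	if (unlikely(!page))
		return -ENOMEM;
	buf = (char *)page_address(page) + pp_frag_offset;
	buf += headroom; /* advance address leaving hole at front of pkt */
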
>
...
> @@ -1769,13 +1788,29 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> */
> len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
> if (rq->page_pool) {
> - struct page *page;
> + if (rq->page_pool->p.flags & PP_FLAG_PAGE_FRAG) {
> + if (unlikely(!page_pool_dev_alloc_frag(rq->page_pool,
> + &pp_frag_offset, len + room)))
> + return -ENOMEM;
> + buf = (char *)page_address(rq->page_pool->frag_page) +
> + pp_frag_offset;
> + buf += headroom; /* advance address leaving hole at front of pkt */
> + hole = (PAGE_SIZE << rq->page_pool->p.order)
> + - rq->page_pool->frag_offset;
> + if (hole < len + room) {
> + if (!headroom)
> + len += hole;
> + rq->page_pool->frag_offset += hole;
Is there any reason why the driver needs to be aware of page_pool->frag_offset?
Won't page_pool_dev_alloc_frag() drain the last page for you when it is
called with size being 'len + room' later?
One case I can think of that needs this is reporting an accurate truesize
for the skb, but I am not sure it matters that much, as the 'struct
page_frag_cache' and 'page_frag' implementations both have a similar problem.
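FWIW, page_pool_alloc_frag() in net/core/page_pool.c already does roughly
the below (paraphrased, not verbatim), opening a fresh page when the
leftover room is too small for the requested size:

	*offset = pool->frag_offset;

	if (page && *offset + size > max_size) {
		/* leftover room too small for this request: drain the
		 * old frag page; a new page is then allocated and
		 * *offset starts again from 0
		 */
		page = page_pool_drain_frag(pool, page);
	}
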
> + }
> + } else {
> + struct page *page;
>
> - page = page_pool_dev_alloc_pages(rq->page_pool);
> - if (unlikely(!page))
> - return -ENOMEM;
> - buf = (char *)page_address(page);
> - buf += headroom; /* advance address leaving hole at front of pkt */
> + page = page_pool_dev_alloc_pages(rq->page_pool);
> + if (unlikely(!page))
> + return -ENOMEM;
> + buf = (char *)page_address(page);
> + buf += headroom; /* advance address leaving hole at front of pkt */
> + }
> } else {
> if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
> return -ENOMEM;
> @@ -3800,13 +3835,16 @@ static void virtnet_alloc_page_pool(struct receive_queue *rq)
> struct virtio_device *vdev = rq->vq->vdev;
>
> struct page_pool_params pp_params = {
> - .order = 0,
> + .order = page_pool_frag ? SKB_FRAG_PAGE_ORDER : 0,
> .pool_size = rq->vq->num_max,
If it is using order SKB_FRAG_PAGE_ORDER pages, perhaps pool_size does not
have to be rq->vq->num_max? Even for order-0 pages, perhaps pool_size does
not need to be as big as rq->vq->num_max?
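Just to illustrate the sizing point (the numbers below are made up, not a
concrete suggestion): with order-SKB_FRAG_PAGE_ORDER pages each page can
back several receive buffers, so something along these lines might already
be enough:

	/* BUFS_PER_FRAG_PAGE is a hypothetical estimate of how many
	 * mergeable buffers fit into one high-order page; a real value
	 * would need to be derived from the expected buffer length
	 */
	#define BUFS_PER_FRAG_PAGE	8

	.pool_size = page_pool_frag ?
		     DIV_ROUND_UP(rq->vq->num_max, BUFS_PER_FRAG_PAGE) :
		     rq->vq->num_max,
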
> .nid = dev_to_node(vdev->dev.parent),
> .dev = vdev->dev.parent,
> .offset = 0,
> };
>
> + if (page_pool_frag)
> + pp_params.flags |= PP_FLAG_PAGE_FRAG;
> +
> rq->page_pool = page_pool_create(&pp_params);
> if (IS_ERR(rq->page_pool)) {
> dev_warn(&vdev->dev, "page pool creation failed: %ld\n",
>