Message-ID: <CACGkMEuJ1LXg+WOryC9Fnk3NRJVhvzy+h8p5-fJFWUu1z8Yqtg@mail.gmail.com>
Date: Thu, 5 Feb 2026 11:54:51 +0800
From: Jason Wang <jasowang@...hat.com>
To: Vishwanath Seshagiri <vishs@...a.com>
Cc: "Michael S . Tsirkin" <mst@...hat.com>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
David Wei <dw@...idwei.uk>, Matteo Croce <technoboy85@...il.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>, netdev@...r.kernel.org,
virtualization@...ts.linux.dev, linux-kernel@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [PATCH net-next v4 1/2] virtio_net: add page_pool support for
buffer allocation
On Thu, Feb 5, 2026 at 3:36 AM Vishwanath Seshagiri <vishs@...a.com> wrote:
>
> Use page_pool for RX buffer allocation in mergeable and small buffer
> modes to enable page recycling and avoid repeated page allocator calls.
> skb_mark_for_recycle() enables page reuse in the network stack.
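
[A minimal sketch of the pattern described above, assuming a struct
page_pool hanging off each receive queue; the helper names here are
illustrative, not taken from the patch. page_pool_dev_alloc_pages()
and skb_mark_for_recycle() are the real kernel APIs involved.]

#include <net/page_pool/helpers.h>
#include <linux/skbuff.h>

/* Refill path: take pages from the per-queue pool so freed RX
 * buffers can be recycled instead of returning to the page
 * allocator on every cycle.
 */
static struct page *rx_refill_page(struct page_pool *pool)
{
        return page_pool_dev_alloc_pages(pool);
}

/* Receive path: tag the skb so the network stack routes its
 * page_pool pages back to the pool when the skb is consumed.
 */
static void rx_finish_skb(struct sk_buff *skb)
{
        skb_mark_for_recycle(skb);
}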
>
> Big packets mode is unchanged because it uses page->private for linked
> list chaining of multiple pages per buffer, which conflicts with
> page_pool's internal use of page->private.
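
[For context, big-packets mode chains pages roughly like this today,
paraphrased from the driver's give_pages(). page_pool keeps its own
per-page metadata in the same struct page union, so the two uses of
page->private cannot coexist on one page.]

static void give_pages(struct receive_queue *rq, struct page *page)
{
        struct page *end;

        /* Walk to the end of the chain threaded through
         * page->private, then splice the queue's existing page
         * list onto it.
         */
        for (end = page; end->private; end = (struct page *)end->private)
                ;
        end->private = (unsigned long)rq->pages;
        rq->pages = page;
}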
>
> Implement conditional DMA premapping using virtqueue_dma_dev() (sketched below):
> - When non-NULL (vhost, virtio-pci): use PP_FLAG_DMA_MAP with page_pool
> handling DMA mapping, submit via virtqueue_add_inbuf_premapped()
> - When NULL (VDUSE, direct physical): page_pool handles allocation only,
> submit via virtqueue_add_inbuf_ctx()
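
[A hedged sketch of that conditional setup, one pool per receive
queue; ring_size and the rq->page_pool field are illustrative names,
not the patch's. virtqueue_dma_dev(), page_pool_create(),
virtqueue_add_inbuf_premapped() and virtqueue_add_inbuf_ctx() are
the real APIs.]

#include <net/page_pool/types.h>

        struct page_pool_params pp = {
                .order          = 0,
                .pool_size      = ring_size,
                .nid            = NUMA_NO_NODE,
        };
        struct device *dma_dev = virtqueue_dma_dev(rq->vq);

        if (dma_dev) {
                /* vhost/virtio-pci: a real DMA device is visible,
                 * so let page_pool own the mapping and keep the
                 * buffers premapped.
                 */
                pp.flags   = PP_FLAG_DMA_MAP;
                pp.dev     = dma_dev;
                pp.dma_dir = DMA_FROM_DEVICE;
        }
        rq->page_pool = page_pool_create(&pp);

        /* Submission follows the same split: */
        if (dma_dev)
                err = virtqueue_add_inbuf_premapped(rq->vq, rq->sg, 1,
                                                    buf, ctx, gfp);
        else
                err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1,
                                              buf, ctx, gfp);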
>
> This preserves the DMA premapping optimization from commit 31f3cd4e5756b
> ("virtio-net: rq submits premapped per-buffer") while adding page_pool
> support as a prerequisite for future zero-copy features (devmem TCP,
> io_uring ZCRX).
>
> Page pools are created in probe and destroyed in remove (not open/close),
> following existing driver behavior where RX buffers remain in virtqueues
> across interface state changes.
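
[The lifecycle, sketched with the same illustrative names as above;
page_pool_create() returns an ERR_PTR on failure and
page_pool_destroy() is the matching teardown call.]

        /* probe: one pool per RX queue, sized to the vq ring */
        for (i = 0; i < vi->max_queue_pairs; i++) {
                vi->rq[i].page_pool = page_pool_create(&pp);
                if (IS_ERR(vi->rq[i].page_pool))
                        goto free_pools;
        }

        /* remove: destroy only after the vqs are reset and all
         * outstanding buffers have been returned to the pools
         */
        for (i = 0; i < vi->max_queue_pairs; i++)
                page_pool_destroy(vi->rq[i].page_pool);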
>
> Signed-off-by: Vishwanath Seshagiri <vishs@...a.com>
> ---
> drivers/net/Kconfig      |   1 +
> drivers/net/virtio_net.c | 351 ++++++++++++++++++++++-----------------
> 2 files changed, 201 insertions(+), 151 deletions(-)
>
Looks good overall, just one spot below.
> -static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
> -{
> - struct virtnet_info *vi = rq->vq->vdev->priv;
> - struct virtnet_rq_dma *dma;
> - dma_addr_t addr;
> - u32 offset;
> - void *head;
> -
> - BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
> -
> - head = page_address(rq->alloc_frag.page);
> -
> - offset = buf - head;
> -
> - dma = head;
> -
> - addr = dma->addr - sizeof(*dma) + offset;
> -
> - sg_init_table(rq->sg, 1);
> - sg_fill_dma(rq->sg, addr, len);
> -}
> -
> -static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
> -{
> - struct page_frag *alloc_frag = &rq->alloc_frag;
> - struct virtnet_info *vi = rq->vq->vdev->priv;
> - struct virtnet_rq_dma *dma;
> - void *buf, *head;
> - dma_addr_t addr;
>
> BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
>
> - head = page_address(alloc_frag->page);
> -
> - dma = head;
> -
> - /* new pages */
> - if (!alloc_frag->offset) {
> - if (rq->last_dma) {
> - /* Now, the new page is allocated, the last dma
> - * will not be used. So the dma can be unmapped
> - * if the ref is 0.
> - */
> - virtnet_rq_unmap(rq, rq->last_dma, 0);
> - rq->last_dma = NULL;
> - }
> -
> - dma->len = alloc_frag->size - sizeof(*dma);
> -
> - addr = virtqueue_map_single_attrs(rq->vq, dma + 1,
> - dma->len, DMA_FROM_DEVICE, 0);
> - if (virtqueue_map_mapping_error(rq->vq, addr))
> - return NULL;
> -
> - dma->addr = addr;
> - dma->need_sync = virtqueue_map_need_sync(rq->vq, addr);
> -
> - /* Add a reference to dma to prevent the entire dma from
> - * being released during error handling. This reference
> - * will be freed after the pages are no longer used.
> - */
> - get_page(alloc_frag->page);
> - dma->ref = 1;
> - alloc_frag->offset = sizeof(*dma);
> -
> - rq->last_dma = dma;
> - }
> -
> - ++dma->ref;
This patch still uses virtnet_rq_unmap() in free_receive_page_frags(),
which looks like a bug.
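
[A hypothetical sketch of the concern, not the required fix:
virtnet_rq_unmap() belongs to the old page_frag DMA bookkeeping, so
pool-backed pages freed at teardown presumably need to go back
through the pool instead, e.g.:]

        if (rq->page_pool)
                /* return the page with its DMA mapping intact */
                page_pool_put_full_page(rq->page_pool, page, false);
        else
                put_page(page);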
Thanks