Message-ID: <323a325d-4eec-7a7a-efda-c0576a5b04ca@redhat.com>
Date: Fri, 9 Mar 2018 16:03:28 +0800
From: Jason Wang <jasowang@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org,
Björn Töpel <bjorn.topel@...el.com>,
magnus.karlsson@...el.com
Cc: eugenia@...lanox.com, John Fastabend <john.fastabend@...il.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [bpf-next V2 PATCH 07/15] virtio_net: convert to use generic
xdp_frame and xdp_return_frame API
On 2018-03-08 21:08, Jesper Dangaard Brouer wrote:
> The virtio_net driver assumes XDP frames are always released based on
> page refcnt (via put_page). Thus, it only queues the XDP data pointer
> address and uses virt_to_head_page() to retrieve the struct page.
>
> Use the XDP return API to get away from such assumptions. Instead,
> queue an xdp_frame, which allows us to use the xdp_return_frame API
> when releasing the frame.
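
For reference, the xdp_frame layout this conversion relies on, as
introduced by the generic API earlier in this series (reproduced here
for readability; field order approximate, only the fields the driver
touches are shown):

	struct xdp_frame {
		void *data;
		u16 len;
		u16 headroom;
		u16 metasize;
		/* mem info stays valid when the frame is returned on a
		 * remote CPU, unlike the NAPI-scoped xdp_rxq_info.
		 */
		struct xdp_mem_info mem;
	};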
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> ---
> drivers/net/virtio_net.c | 31 +++++++++++++++++++++----------
> 1 file changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 23374603e4d9..6c4220450506 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -419,30 +419,41 @@ static bool __virtnet_xdp_xmit(struct virtnet_info *vi,
>  			       struct xdp_buff *xdp)
>  {
>  	struct virtio_net_hdr_mrg_rxbuf *hdr;
> -	unsigned int len;
> +	struct xdp_frame *xdpf, *xdpf_sent;
>  	struct send_queue *sq;
> +	unsigned int len;
>  	unsigned int qp;
> -	void *xdp_sent;
>  	int err;
> 
>  	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
>  	sq = &vi->sq[qp];
> 
>  	/* Free up any pending old buffers before queueing new ones. */
> -	while ((xdp_sent = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> -		struct page *sent_page = virt_to_head_page(xdp_sent);
> +	while ((xdpf_sent = virtqueue_get_buf(sq->vq, &len)) != NULL)
> +		xdp_return_frame(xdpf_sent->data, &xdpf_sent->mem);
> 
> -		put_page(sent_page);
> -	}
> +	xdpf = convert_to_xdp_frame(xdp);
> +	if (unlikely(!xdpf))
> +		return -EOVERFLOW;
> +
> +	/* virtqueue wants to use the data area in front of the packet */
> +	if (unlikely(xdpf->metasize > 0))
> +		return -EOPNOTSUPP;
> +
> +	if (unlikely(xdpf->headroom < vi->hdr_len))
> +		return -EOVERFLOW;
I think this needs another independent patch. For now we can simply drop
the packet, but we probably need to think more to address it completely,
e.g. allocate the header part either dynamically or statically.
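
To illustrate the dynamic variant (a rough, untested sketch; the
separately allocated header would also need to be freed on tx
completion, which is part of why this needs more thought):

	/* Sketch: virtio hdr in its own allocation, packet left untouched */
	hdr = kzalloc(vi->hdr_len, GFP_ATOMIC);
	if (unlikely(!hdr))
		return -ENOMEM;
	sg_init_table(sq->sg, 2);
	sg_set_buf(&sq->sg[0], hdr, vi->hdr_len);
	sg_set_buf(&sq->sg[1], xdpf->data, xdpf->len);
	err = virtqueue_add_outbuf(sq->vq, sq->sg, 2, xdpf, GFP_ATOMIC);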
>
> -	xdp->data -= vi->hdr_len;
> +	/* Make room for virtqueue hdr (also change xdpf->headroom?) */
> +	xdpf->data -= vi->hdr_len;
>  	/* Zero header and leave csum up to XDP layers */
> -	hdr = xdp->data;
> +	hdr = xdpf->data;
>  	memset(hdr, 0, vi->hdr_len);
> +	hdr->hdr.hdr_len = xdpf->len; /* Q: is this needed? */
Maybe another patch, but not a must: hdr_len is a hint for the linear
part of the skb used in the host. A backend implementation may simply
ignore this value.
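
For context, the relevant header layout from
include/uapi/linux/virtio_net.h (abridged):

	struct virtio_net_hdr {
		__u8 flags;
		__u8 gso_type;
		__virtio16 hdr_len;	/* Ethernet + IP + tcp/udp hdrs */
		__virtio16 gso_size;	/* Bytes to append per frame */
		__virtio16 csum_start;
		__virtio16 csum_offset;
	};

	struct virtio_net_hdr_mrg_rxbuf {
		struct virtio_net_hdr hdr;
		__virtio16 num_buffers;	/* Number of merged rx buffers */
	};

With flags and gso_type zeroed, as this patch does, hdr_len should be
safe to leave at zero as well.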
Thanks
> +	xdpf->len += vi->hdr_len;
> 
> -	sg_init_one(sq->sg, xdp->data, xdp->data_end - xdp->data);
> +	sg_init_one(sq->sg, xdpf->data, xdpf->len);
> 
> -	err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdp->data, GFP_ATOMIC);
> +	err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
>  	if (unlikely(err))
>  		return false; /* Caller handle free/refcnt */
> 
>