Message-ID: <20200226013336-mutt-send-email-mst@kernel.org>
Date: Wed, 26 Feb 2020 01:43:21 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: David Ahern <dsahern@...nel.org>
Cc: netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
David Ahern <dahern@...italocean.com>,
Jason Wang <jasowang@...hat.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for
using XDP
On Tue, Feb 25, 2020 at 05:57:44PM -0700, David Ahern wrote:
> From: David Ahern <dahern@...italocean.com>
>
> virtio_net currently requires extra queues to install an XDP program,
> with the rule being twice as many queues as vcpus. From a host
> perspective this means the VM needs to have 2*vcpus vhost threads
> for each guest NIC for which XDP is to be allowed. For example, a
> 16 vcpu VM with 2 tap devices needs 64 vhost threads.
>
> The extra queues are only needed in case an XDP program wants to
> return XDP_TX. XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
> additional queues. Relax the queue requirement and allow XDP
> functionality based on available resources. If an XDP program is
> loaded and there are insufficient queues, warn the user, and if the
> program returns XDP_TX just drop the packet. This allows the rest
> of the XDP functionality to work without putting an unreasonable
> burden on the host.
>
> Cc: Jason Wang <jasowang@...hat.com>
> Cc: Michael S. Tsirkin <mst@...hat.com>
> Signed-off-by: David Ahern <dahern@...italocean.com>
It isn't particularly easy for userspace to detect that packets are
being dropped. If there's a need for a limited XDP mode on devices
with limited resources, IMHO it's better for userspace to declare
that to the driver explicitly.
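
To make that concrete, here is a rough sketch of what such a declaration
could look like: a new XDP attach flag that the loader sets to promise it
never returns XDP_TX. The flag name XDP_FLAGS_NO_TX and the assumption
that the attach flags reach virtnet_xdp_set() are hypothetical, not
existing UAPI; this is only meant to illustrate the idea.

        /* Hypothetical UAPI flag (not in include/uapi/linux/if_link.h today):
         * userspace sets it at attach time to promise the program never
         * returns XDP_TX, so the driver may skip reserving extra TX queues.
         */
        #define XDP_FLAGS_NO_TX         (1U << 5)

        /* Sketch of the driver side, assuming the attach flags are plumbed
         * through to virtnet_xdp_set(): only fall back to "no XDP_TX" mode
         * when userspace explicitly opted in, otherwise fail as before.
         */
        if (curr_qp + xdp_qp > vi->max_queue_pairs) {
                if (!(xdp_flags & XDP_FLAGS_NO_TX)) {
                        NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
                        return -ENOMEM;
                }
                vi->can_do_xdp_tx = false;
        } else {
                vi->can_do_xdp_tx = true;
        }

The netlink attach path already carries flags (e.g. XDP_FLAGS_SKB_MODE),
so extending it this way seems plausible; whether and how the flags get
passed down to the driver is left open here.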
> ---
> drivers/net/virtio_net.c | 14 ++++++++++----
> 1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 2fe7a3188282..2f4c5b2e674d 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -190,6 +190,8 @@ struct virtnet_info {
> /* # of XDP queue pairs currently used by the driver */
> u16 xdp_queue_pairs;
>
> + bool can_do_xdp_tx;
> +
> /* I like... big packets and I cannot lie! */
> bool big_packets;
>
> @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
> len = xdp.data_end - xdp.data;
> break;
> case XDP_TX:
> + if (!vi->can_do_xdp_tx)
> + goto err_xdp;
> stats->xdp_tx++;
> xdpf = convert_to_xdp_frame(&xdp);
> if (unlikely(!xdpf))
> @@ -870,6 +874,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> }
> break;
> case XDP_TX:
> + if (!vi->can_do_xdp_tx)
> + goto err_xdp;
> stats->xdp_tx++;
> xdpf = convert_to_xdp_frame(&xdp);
> if (unlikely(!xdpf))
> @@ -2435,10 +2441,10 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
>
> /* XDP requires extra queues for XDP_TX */
> if (curr_qp + xdp_qp > vi->max_queue_pairs) {
> - NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
> - netdev_warn(dev, "request %i queues but max is %i\n",
> - curr_qp + xdp_qp, vi->max_queue_pairs);
> - return -ENOMEM;
> + NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available; XDP_TX will not be allowed");
> + vi->can_do_xdp_tx = false;
> + } else {
> + vi->can_do_xdp_tx = true;
> }
>
> old_prog = rtnl_dereference(vi->rq[0].xdp_prog);
> --
> 2.17.1
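
For reference, the kind of program the relaxed mode in this patch targets
is one that only ever returns XDP_PASS, XDP_DROP or XDP_REDIRECT. A
minimal illustrative sketch follows (not part of the patch; the program
name and sections are made up):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        SEC("xdp")
        int xdp_no_tx_example(struct xdp_md *ctx)
        {
                void *data     = (void *)(long)ctx->data;
                void *data_end = (void *)(long)ctx->data_end;

                /* A real program would parse headers and filter here;
                 * the point is that it never returns XDP_TX, so it keeps
                 * working even when the extra TX queues are unavailable.
                 */
                if (data >= data_end)
                        return XDP_DROP;

                return XDP_PASS;
        }

        char _license[] SEC("license") = "GPL";

Such a program would load and behave the same whether or not the device
can provide the extra TX queues.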