Message-ID: <20210115174752.3d2e8109@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Fri, 15 Jan 2021 17:47:52 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Cc: netdev@...r.kernel.org, "Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
virtualization@...ts.linux-foundation.org, bpf@...r.kernel.org,
dust.li@...ux.alibaba.com
Subject: Re: [PATCH netdev] virtio-net: support XDP_TX when not more queues
On Wed, 13 Jan 2021 16:08:57 +0800 Xuan Zhuo wrote:
> The number of queues implemented by many virtio backends is limited,
> especially on machines with a large number of CPUs. In this case, it
> is often impossible to allocate a separate queue for XDP_TX.
>
> This patch allows XDP_TX to run by reusing an existing SQ with
> __netif_tx_lock() held when there are not enough queues.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> Reviewed-by: Dust Li <dust.li@...ux.alibaba.com>
Since reviews are not coming in, let me share some of mine.
nit: please put [PATCH net-next] not [PATCH netdev]
> -static struct send_queue *virtnet_xdp_sq(struct virtnet_info *vi)
> +static struct send_queue *virtnet_get_xdp_sq(struct virtnet_info *vi)
> {
> unsigned int qp;
> + struct netdev_queue *txq;
nit: please order variable declaration lines longest to shortest
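i.e. reverse xmas tree order:

	struct netdev_queue *txq;
	unsigned int qp;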
> +
> + if (vi->curr_queue_pairs > nr_cpu_ids) {
> + qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
> + } else {
> + qp = smp_processor_id() % vi->curr_queue_pairs;
> + txq = netdev_get_tx_queue(vi->dev, qp);
> + __netif_tx_lock(txq, raw_smp_processor_id());
> + }
>
> - qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
> return &vi->sq[qp];
> }
>
> +static void virtnet_put_xdp_sq(struct virtnet_info *vi)
> +{
> + unsigned int qp;
> + struct netdev_queue *txq;
nit: longest to shortest
> +
> + if (vi->curr_queue_pairs <= nr_cpu_ids) {
> + qp = smp_processor_id() % vi->curr_queue_pairs;
Feels a little wasteful to do the modulo calculation twice per packet
(see the sketch after this hunk).
> + txq = netdev_get_tx_queue(vi->dev, qp);
> + __netif_tx_unlock(txq);
> + }
> +}
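One way to avoid that (untested sketch; it assumes virtnet_get_xdp_sq()
is changed to return the sq it locked so the caller can hand it back,
letting put derive the queue index from the pointer offset instead of
redoing the modulo):

	static void virtnet_put_xdp_sq(struct virtnet_info *vi,
				       struct send_queue *sq)
	{
		/* only the shared-queue path took the tx lock */
		if (vi->curr_queue_pairs <= nr_cpu_ids)
			__netif_tx_unlock(netdev_get_tx_queue(vi->dev,
							      sq - vi->sq));
	}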
> + vi->xdp_enabled = false;
> if (prog) {
> for (i = 0; i < vi->max_queue_pairs; i++) {
> rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
> if (i == 0 && !old_prog)
> virtnet_clear_guest_offloads(vi);
> }
> + vi->xdp_enabled = true;
is xdp_enabled really needed? can't we do the headroom calculation
based on the program pointer being not NULL? Either way xdp_enabled
should not temporarily switch true -> false -> true when program
is swapped.
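Something like this would keep it a single write (untested sketch):

	if (prog) {
		for (i = 0; i < vi->max_queue_pairs; i++) {
			rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
			if (i == 0 && !old_prog)
				virtnet_clear_guest_offloads(vi);
		}
	}
	vi->xdp_enabled = !!prog;	/* no intermediate false -> true flip */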
> }
>
> for (i = 0; i < vi->max_queue_pairs; i++) {