Date: Tue, 18 Jun 2024 08:53:42 +0800
From: Jason Wang <jasowang@...hat.com>
To: Jiri Pirko <jiri@...nulli.us>
Cc: netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com, 
	kuba@...nel.org, pabeni@...hat.com, mst@...hat.com, 
	xuanzhuo@...ux.alibaba.com, virtualization@...ts.linux.dev, ast@...nel.org, 
	daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com, 
	dave.taht@...il.com, kerneljasonxing@...il.com, hengqi@...ux.alibaba.com
Subject: Re: [PATCH net-next v2] virtio_net: add support for Byte Queue Limits

On Mon, Jun 17, 2024 at 5:18 PM Jiri Pirko <jiri@...nulli.us> wrote:
>
> Mon, Jun 17, 2024 at 04:34:26AM CEST, jasowang@...hat.com wrote:
> >On Thu, Jun 13, 2024 at 1:09 AM Jiri Pirko <jiri@...nulli.us> wrote:
> >>
> >> From: Jiri Pirko <jiri@...dia.com>
> >>
> >> Add support for Byte Queue Limits (BQL).
> >>
> >> Tested on qemu emulated virtio_net device with 1, 2 and 4 queues.
> >> Tested with fq_codel and pfifo_fast. Super netperf with 50 threads is
> >> running in background. Netperf TCP_RR results:
> >>
> >> NOBQL FQC 1q:  159.56  159.33  158.50  154.31    avg: 157.925
> >> NOBQL FQC 2q:  184.64  184.96  174.73  174.15    avg: 179.62
> >> NOBQL FQC 4q:  994.46  441.96  416.50  499.56    avg: 588.12
> >> NOBQL PFF 1q:  148.68  148.92  145.95  149.48    avg: 148.2575
> >> NOBQL PFF 2q:  171.86  171.20  170.42  169.42    avg: 170.725
> >> NOBQL PFF 4q: 1505.23 1137.23 2488.70 3507.99    avg: 2159.7875
> >>   BQL FQC 1q: 1332.80 1297.97 1351.41 1147.57    avg: 1282.4375
> >>   BQL FQC 2q:  768.30  817.72  864.43  974.40    avg: 856.2125
> >>   BQL FQC 4q:  945.66  942.68  878.51  822.82    avg: 897.4175
> >>   BQL PFF 1q:  149.69  151.49  149.40  147.47    avg: 149.5125
> >>   BQL PFF 2q: 2059.32  798.74 1844.12  381.80    avg: 1270.995
> >>   BQL PFF 4q: 1871.98 4420.02 4916.59 13268.16   avg: 6119.1875
> >>
> >> Signed-off-by: Jiri Pirko <jiri@...dia.com>
> >> ---
> >> v1->v2:
> >> - moved netdev_tx_completed_queue() call into __free_old_xmit(),
> >>   propagate use_napi flag to __free_old_xmit() and only call
> >>   netdev_tx_completed_queue() in case it is true
> >> - added forgotten call to netdev_tx_reset_queue()
> >> - fixed stats for xdp packets
> >> - fixed bql accounting when __free_old_xmit() is called from xdp path
> >> - handle the !use_napi case in start_xmit() kick section
> >> ---
> >>  drivers/net/virtio_net.c | 50 +++++++++++++++++++++++++---------------
> >>  1 file changed, 32 insertions(+), 18 deletions(-)
> >>
> >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >> index 61a57d134544..5863c663ccab 100644
> >> --- a/drivers/net/virtio_net.c
> >> +++ b/drivers/net/virtio_net.c
> >> @@ -84,7 +84,9 @@ struct virtnet_stat_desc {
> >>
> >>  struct virtnet_sq_free_stats {
> >>         u64 packets;
> >> +       u64 xdp_packets;
> >>         u64 bytes;
> >> +       u64 xdp_bytes;
> >>  };
> >>
> >>  struct virtnet_sq_stats {
> >> @@ -506,29 +508,33 @@ static struct xdp_frame *ptr_to_xdp(void *ptr)
> >>         return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> >>  }
> >>
> >> -static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> >> +static void __free_old_xmit(struct send_queue *sq, struct netdev_queue *txq,
> >> +                           bool in_napi, bool use_napi,
> >>                             struct virtnet_sq_free_stats *stats)
> >>  {
> >>         unsigned int len;
> >>         void *ptr;
> >>
> >>         while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> >> -               ++stats->packets;
> >> -
> >>                 if (!is_xdp_frame(ptr)) {
> >>                         struct sk_buff *skb = ptr;
> >>
> >>                         pr_debug("Sent skb %p\n", skb);
> >>
> >> +                       stats->packets++;
> >>                         stats->bytes += skb->len;
> >>                         napi_consume_skb(skb, in_napi);
> >>                 } else {
> >>                         struct xdp_frame *frame = ptr_to_xdp(ptr);
> >>
> >> -                       stats->bytes += xdp_get_frame_len(frame);
> >> +                       stats->xdp_packets++;
> >> +                       stats->xdp_bytes += xdp_get_frame_len(frame);
> >>                         xdp_return_frame(frame);
> >>                 }
> >>         }
> >> +       if (use_napi)
> >> +               netdev_tx_completed_queue(txq, stats->packets, stats->bytes);
> >> +
> >>  }
> >
> >I wonder if this works correctly: for example, NAPI could be enabled
> >after a packet was queued but before it is sent, so
> >__netdev_tx_sent_queue() was never called for that packet.
>
> How is that possible? The NAPI weight can't change while the link is up. Or
> am I missing something?

Something like this:

1) packets are queued
2) the interface goes down
3) NAPI is enabled
4) the interface comes back up
5) the queued packets are sent

?
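
To make the concern concrete, here is a minimal generic sketch of the BQL
pairing rule in question. example_xmit() and example_tx_complete() are
hypothetical helpers for illustration, not virtio_net code: every byte passed
to netdev_tx_completed_queue() must first have been reported via
netdev_tx_sent_queue(). In the scenario above, step 1) happens while BQL
accounting is off, so step 5) would complete bytes that were never accounted
as sent and the sanity check in dql_completed() can fire.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ...post the skb to the ring/virtqueue here... */

	/* BQL: account the bytes as in flight. If this call is skipped
	 * (accounting disabled at enqueue time), the completion side below
	 * must not report these bytes either.
	 */
	netdev_tx_sent_queue(txq, skb->len);

	return NETDEV_TX_OK;
}

static void example_tx_complete(struct net_device *dev, u16 qidx,
				unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, qidx);

	/* Only bytes previously passed to netdev_tx_sent_queue() may be
	 * reported here; otherwise the DQL counters underflow.
	 */
	netdev_tx_completed_queue(txq, pkts, bytes);
}
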

Thanks


