Message-ID: <CANn89iL6MjvOc8qEQpeQJPLX0Y3X0HmqNcmgHL4RzfcijPim5w@mail.gmail.com>
Date: Tue, 4 Nov 2025 09:00:14 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Simon Schippers <simon.schippers@...dortmund.de>
Cc: oneukum@...e.com, andrew+netdev@...n.ch, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com, netdev@...r.kernel.org,
linux-usb@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v1 1/1] usbnet: Add support for Byte Queue Limits (BQL)
On Tue, Nov 4, 2025 at 8:14 AM Simon Schippers
<simon.schippers@...dortmund.de> wrote:
>
> The usbnet driver currently relies on fixed transmit queue lengths, which
> can lead to bufferbloat and large latency spikes under load -
> particularly with cellular modems.
> This patch adds support for Byte Queue Limits (BQL) to dynamically manage
> the transmit queue size and reduce latency without sacrificing
> throughput.
>
> Testing was performed on various devices using the usbnet driver for
> packet transmission:
>
> - DELOCK 66045: USB3 to 2.5 GbE adapter (ax88179_178a)
> - DELOCK 61969: USB2 to 1 GbE adapter (asix)
> - Quectel RM520: 5G modem (qmi_wwan)
> - USB2 Android tethering (cdc_ncm)
>
> No performance degradation was observed for iperf3 TCP or UDP traffic,
> while latency for a prioritized ping application was significantly
> reduced. For example, with the USB3 to 2.5 GbE adapter fully utilized
> by iperf3 UDP traffic, the prioritized ping latency dropped from
> 1.6 ms to 0.6 ms. With the same setup but a 100 Mbit/s Ethernet
> connection, it dropped from 35 ms to 5 ms.
>
> Signed-off-by: Simon Schippers <simon.schippers@...dortmund.de>
> ---
> drivers/net/usb/usbnet.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
> index 62a85dbad31a..1994f03a78ad 100644
> --- a/drivers/net/usb/usbnet.c
> +++ b/drivers/net/usb/usbnet.c
> @@ -831,6 +831,7 @@ int usbnet_stop(struct net_device *net)
>
> clear_bit(EVENT_DEV_OPEN, &dev->flags);
> netif_stop_queue (net);
> + netdev_reset_queue(net);
>
> netif_info(dev, ifdown, dev->net,
> "stop stats: rx/tx %lu/%lu, errs %lu/%lu\n",
> @@ -939,6 +940,7 @@ int usbnet_open(struct net_device *net)
> }
>
> set_bit(EVENT_DEV_OPEN, &dev->flags);
> + netdev_reset_queue(net);
> netif_start_queue (net);
> netif_info(dev, ifup, dev->net,
> "open: enable queueing (rx %d, tx %d) mtu %d %s framing\n",
> @@ -1500,6 +1502,7 @@ netdev_tx_t usbnet_start_xmit(struct sk_buff *skb, struct net_device *net)
> case 0:
> netif_trans_update(net);
> __usbnet_queue_skb(&dev->txq, skb, tx_start);
> + netdev_sent_queue(net, skb->len);
> if (dev->txq.qlen >= TX_QLEN (dev))
> netif_stop_queue (net);
> }
> @@ -1563,6 +1566,7 @@ static inline void usb_free_skb(struct sk_buff *skb)
> static void usbnet_bh(struct timer_list *t)
> {
> struct usbnet *dev = timer_container_of(dev, t, delay);
> + unsigned int bytes_compl = 0, pkts_compl = 0;
> struct sk_buff *skb;
> struct skb_data *entry;
>
> @@ -1574,6 +1578,8 @@ static void usbnet_bh(struct timer_list *t)
> usb_free_skb(skb);
> continue;
> case tx_done:
> + bytes_compl += skb->len;
> + pkts_compl++;
> kfree(entry->urb->sg);
> fallthrough;
> case rx_cleanup:
> @@ -1584,6 +1590,8 @@ static void usbnet_bh(struct timer_list *t)
> }
> }
>
> + netdev_completed_queue(dev->net, pkts_compl, bytes_compl);
> +
> /* restart RX again after disabling due to high error rate */
> clear_bit(EVENT_RX_KILL, &dev->flags);
>

I think this is racy. usbnet_bh() can run from two different contexts
at the same time (on two CPUs):

1) from process context: usbnet_bh_work()
2) from a timer: dev->delay

To use BQL, you will need to add mutual exclusion.
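
For illustration only, here is a rough, untested sketch (not a proposed
patch) of one possible shape: funnel the completion accounting through a
small helper guarded by a new lock in struct usbnet, so the timer path
and the work path can never call netdev_completed_queue() concurrently.
The "bql_lock" field and the helper name are made up for the sketch; the
lock would need spin_lock_init() in usbnet_probe().

/* Hypothetical helper; assumes struct usbnet gained a new field
 * "spinlock_t bql_lock" (initialized with spin_lock_init() in
 * usbnet_probe()) that serializes BQL completion accounting.
 */
static void usbnet_bql_complete(struct usbnet *dev,
				unsigned int pkts_compl,
				unsigned int bytes_compl)
{
	/* spin_lock_bh(): the timer path runs in BH context, while per
	 * the analysis above usbnet_bh_work() may run in process
	 * context, where a plain spin_lock() could deadlock against the
	 * timer firing on the same CPU.
	 */
	spin_lock_bh(&dev->bql_lock);
	netdev_completed_queue(dev->net, pkts_compl, bytes_compl);
	spin_unlock_bh(&dev->bql_lock);
}

usbnet_bh() would then call usbnet_bql_complete(dev, pkts_compl,
bytes_compl) instead of netdev_completed_queue() directly. Another
option is to avoid the concurrency altogether, e.g. by having the timer
only (re)schedule the bh work so that all completion handling runs in a
single, serialized context.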