Message-ID: <1b498052994c4ed48de45b5af9a490b6@realtek.com>
Date: Thu, 15 Jan 2026 11:42:20 +0000
From: Hayes Wang <hayeswang@...ltek.com>
To: lu lu <insyelu@...il.com>
CC: "andrew+netdev@...n.ch" <andrew+netdev@...n.ch>,
        "davem@...emloft.net"
	<davem@...emloft.net>,
        nic_swsd <nic_swsd@...ltek.com>, "tiwai@...e.de"
	<tiwai@...e.de>,
        "linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] net: usb: r8152: fix transmit queue timeout

> From: lu lu <insyelu@...il.com>
> Sent: Thursday, January 15, 2026 9:37 AM
[...]
> To reduce the performance impact on the tx_tl tasklet’s transmit path,
> netif_trans_update() has been moved from the main transmit path into
> write_bulk_callback (the USB transfer completion callback).
> The main considerations are as follows:
> 1. Reduce frequent tasklet overhead
> netif_trans_update() is invoked frequently under high-throughput
> conditions. Calling it directly in the main transmit path introduces a
> small but persistent CPU overhead, degrading the
> scheduling efficiency of the tx_tl tasklet.
> 2. Move non-critical operations out of the critical path
> By deferring netif_trans_update() to the USB completion callback, and
> ensuring it executes after tasklet_schedule(&tp->tx_tl), the timestamp
> update is removed from the critical transmit scheduling path, further
> reducing the burden on tx_tl.

Excuse me, I do not fully understand the reasoning above.
It seems that this change merely shifts the work from tx_tl to the TX completion callback.

While the intention is to make tx_tl run faster, this also makes the completion callback take longer,
which in turn may delay both the next callback execution and the next scheduling of tx_tl.

From this perspective, it is unclear what is actually being saved.

Have you observed a measurable difference based on testing?

If you want to reduce the frequency of calling netif_trans_update(),
you could try something like the following. This way,
netif_trans_update() would not be executed on every transmission.

--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -2432,9 +2432,12 @@ static int r8152_tx_agg_fill(struct r8152 *tp, struct tx_agg *agg)

        netif_tx_lock(tp->netdev);

-       if (netif_queue_stopped(tp->netdev) &&
-           skb_queue_len(&tp->tx_queue) < tp->tx_qlen)
-               netif_wake_queue(tp->netdev);
+       if (netif_queue_stopped(tp->netdev)) {
+               if (skb_queue_len(&tp->tx_queue) < tp->tx_qlen)
+                       netif_wake_queue(tp->netdev);
+               else
+                       netif_trans_update(tp->netdev);
+       }

        netif_tx_unlock(tp->netdev);
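
For background (written from memory, so treat it as a sketch rather than
the exact upstream code): the netdev watchdog only declares a TX timeout
when a queue has been stopped for longer than watchdog_timeo since its
trans_start was last updated, roughly:

	/* simplified paraphrase of the per-queue check in dev_watchdog(),
	 * net/sched/sch_generic.c; not the verbatim source */
	if (netif_xmit_stopped(txq) &&
	    time_after(jiffies, READ_ONCE(txq->trans_start) + dev->watchdog_timeo))
		dev->netdev_ops->ndo_tx_timeout(dev, i);	/* TX timeout fires */

So refreshing trans_start only in the stopped-and-still-full branch above
should be enough to keep the watchdog quiet without touching the fast path.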

Best Regards,
Hayes
