Message-ID: <CANn89iL9XR=NA=_Bm-CkQh7KqOgC4f+pjCp+AiZ8B7zeiczcsA@mail.gmail.com>
Date: Sun, 9 Nov 2025 12:29:12 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Jonas Köppeler <j.koeppeler@...berlin.de>
Cc: Toke Høiland-Jørgensen <toke@...hat.com>, 
	"David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, 
	Simon Horman <horms@...nel.org>, Jamal Hadi Salim <jhs@...atatu.com>, 
	Cong Wang <xiyou.wangcong@...il.com>, Jiri Pirko <jiri@...nulli.us>, 
	Kuniyuki Iwashima <kuniyu@...gle.com>, Willem de Bruijn <willemb@...gle.com>, netdev@...r.kernel.org, 
	eric.dumazet@...il.com
Subject: Re: [PATCH v1 net-next 5/5] net: dev_queue_xmit() llist adoption

On Sun, Nov 9, 2025 at 12:18 PM Eric Dumazet <edumazet@...gle.com> wrote:
>

> I think the issue is really about TCQ_F_ONETXQUEUE :

dequeue_skb() can only dequeue 8 packets at a time, then has to
release the qdisc spinlock.
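
For illustration only (not the kernel code itself), the pattern being
described is "drain a bounded batch per lock hold, then release the lock
so producers can enqueue again". A minimal userspace sketch, where
BATCH = 8 stands in for the 8-packet dequeue batch and a pthread mutex
stands in for the qdisc spinlock:

/* Illustrative userspace sketch, not kernel code: drain at most BATCH
 * queued items per lock hold, then drop the lock so that producers get
 * a chance to run.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH 8	/* stand-in for the 8-packet dequeue batch */

struct node {
	struct node *next;
	int id;
};

struct queue {
	pthread_mutex_t lock;	/* stand-in for the qdisc spinlock */
	struct node *head;
};

/* Drain up to BATCH items, then release the lock.  Returns items drained. */
static int drain_batch(struct queue *q)
{
	int count = 0;

	pthread_mutex_lock(&q->lock);
	while (count < BATCH && q->head) {
		struct node *n = q->head;

		q->head = n->next;
		count++;
		free(n);	/* "transmit" stand-in */
	}
	pthread_mutex_unlock(&q->lock);
	return count;
}

int main(void)
{
	struct queue q = { .lock = PTHREAD_MUTEX_INITIALIZER, .head = NULL };
	int i, drained, total = 0;

	/* Enqueue 20 items; draining them takes three lock holds (8 + 8 + 4). */
	for (i = 0; i < 20; i++) {
		struct node *n = malloc(sizeof(*n));

		n->id = i;
		n->next = q.head;
		q.head = n;
	}
	while ((drained = drain_batch(&q)) > 0) {
		total += drained;
		printf("drained %d this round (total %d)\n", drained, total);
	}
	return 0;
}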

>
>
> Perhaps we should not accept q->limit packets in the ll_list, but a
> much smaller limit.

I will test something like this:

diff --git a/net/core/dev.c b/net/core/dev.c
index 69515edd17bc6a157046f31b3dd343a59ae192ab..e4187e2ca6324781216c073de2ec20626119327a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4185,8 +4185,12 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
        first_n = READ_ONCE(q->defer_list.first);
        do {
                if (first_n && !defer_count) {
+                       unsigned long total;
+
                        defer_count = atomic_long_inc_return(&q->defer_count);
-                       if (unlikely(defer_count > q->limit)) {
+                       total = defer_count + READ_ONCE(q->q.qlen);
+
+                       if (unlikely(defer_count > 256 || total > READ_ONCE(q->limit))) {
                                kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
                                return NET_XMIT_DROP;
                        }
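
For clarity, the admission check the hunk introduces, restated as a
standalone sketch (illustrative only; the 256 cap is the experimental
value from the diff above, and the helper name defer_admit() is made up
for the example, it is not part of the patch):

/* Illustrative sketch, not a drop-in for the patch: should a new packet
 * be admitted to the deferred llist?  defer_count is this caller's slot
 * from atomic_long_inc_return(), qlen is the current qdisc backlog,
 * limit is q->limit.
 */
#include <stdbool.h>
#include <stdio.h>

#define DEFER_CAP 256	/* experimental cap from the diff above */

bool defer_admit(unsigned long defer_count, unsigned long qlen,
		 unsigned long limit)
{
	unsigned long total = defer_count + qlen;

	/* Reject when the llist alone is already too long, or when llist
	 * plus the qdisc backlog would exceed the qdisc limit; the caller
	 * then drops with SKB_DROP_REASON_QDISC_DROP. */
	return defer_count <= DEFER_CAP && total <= limit;
}

int main(void)
{
	/* 100 deferred, backlog 900, limit 1000 -> total 1000, admitted. */
	printf("%d\n", defer_admit(100, 900, 1000));
	/* 300 deferred exceeds the 256 cap -> dropped. */
	printf("%d\n", defer_admit(300, 100, 1000));
	return 0;
}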
