Message-ID: <20260203214716.880853-1-edumazet@google.com>
Date: Tue, 3 Feb 2026 21:47:16 +0000
From: Eric Dumazet <edumazet@...gle.com>
To: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: Simon Horman <horms@...nel.org>, Jamal Hadi Salim <jhs@...atatu.com>, Jiri Pirko <jiri@...nulli.us>,
netdev@...r.kernel.org, eric.dumazet@...il.com,
Eric Dumazet <edumazet@...gle.com>
Subject: [PATCH net-next] net_sched: sch_fq: tweak unlikely() hints in fq_dequeue()
After commit 076433bd78d7 ("net_sched: sch_fq: add fast path
for mostly idle qdisc") we need to remove one unlikely(),
because q->internal now holds all the fast path packets:

	skb = fq_peek(&q->internal);
	if (unlikely(skb)) {
		q->internal.qlen--;

Conversely, calling INET_ECN_set_ce() is very unlikely.

These changes allow fq_dequeue_skb() to be (auto)inlined,
thus making fq_dequeue() faster.
$ scripts/bloat-o-meter -t vmlinux.0 vmlinux
add/remove: 2/2 grow/shrink: 0/1 up/down: 283/-269 (14)
Function                                     old     new   delta
INET_ECN_set_ce                                -     267    +267
__pfx_INET_ECN_set_ce                          -      16     +16
__pfx_fq_dequeue_skb                          16       -     -16
fq_dequeue_skb                               103       -    -103
fq_dequeue                                  1685    1535    -150
Total: Before=24886569, After=24886583, chg +0.00%
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
---
net/sched/sch_fq.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 6e5f2f4f241546605f8ba37f96275446c8836eee..d0200ec8ada62e86f10d823556bedcaefb470e6c 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -665,7 +665,7 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
 		return NULL;
 
 	skb = fq_peek(&q->internal);
-	if (unlikely(skb)) {
+	if (skb) {
 		q->internal.qlen--;
 		fq_dequeue_skb(sch, &q->internal, skb);
 		goto out;
@@ -716,7 +716,7 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
 	}
 	prefetch(&skb->end);
 	fq_dequeue_skb(sch, f, skb);
-	if ((s64)(now - time_next_packet - q->ce_threshold) > 0) {
+	if (unlikely((s64)(now - time_next_packet - q->ce_threshold) > 0)) {
 		INET_ECN_set_ce(skb);
 		q->stat_ce_mark++;
 	}
--
2.53.0.rc2.204.g2597b5adb4-goog