Message-ID: <CAM0EoM=aJdS5D1eHUJHHA4outzzriZEKbByfmOTOoPxNih4Wmw@mail.gmail.com>
Date: Wed, 4 Feb 2026 13:24:48 -0500
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>, Jiri Pirko <jiri@...nulli.us>, 
	netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next] net_sched: sch_fq: tweak unlikely() hints in fq_dequeue()

On Tue, Feb 3, 2026 at 4:47 PM Eric Dumazet <edumazet@...gle.com> wrote:
>
> After 076433bd78d7 ("net_sched: sch_fq: add fast path
> for mostly idle qdisc") we need to remove one unlikely()
> because q->internal holds all the fast path packets.
>
>        skb = fq_peek(&q->internal);
>        if (unlikely(skb)) {
>                 q->internal.qlen--;
>
> Calling INET_ECN_set_ce() is very unlikely.
>
> These changes allow fq_dequeue_skb() to be (auto)inlined,
> thus making fq_dequeue() faster.
>
> $ scripts/bloat-o-meter -t vmlinux.0 vmlinux
> add/remove: 2/2 grow/shrink: 0/1 up/down: 283/-269 (14)
> Function                                     old     new   delta
> INET_ECN_set_ce                                -     267    +267
> __pfx_INET_ECN_set_ce                          -      16     +16
> __pfx_fq_dequeue_skb                          16       -     -16
> fq_dequeue_skb                               103       -    -103
> fq_dequeue                                  1685    1535    -150
> Total: Before=24886569, After=24886583, chg +0.00%
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
>  net/sched/sch_fq.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
> index 6e5f2f4f241546605f8ba37f96275446c8836eee..d0200ec8ada62e86f10d823556bedcaefb470e6c 100644
> --- a/net/sched/sch_fq.c
> +++ b/net/sched/sch_fq.c
> @@ -665,7 +665,7 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
>                 return NULL;
>
>         skb = fq_peek(&q->internal);
> -       if (unlikely(skb)) {
> +       if (skb) {
>                 q->internal.qlen--;
>                 fq_dequeue_skb(sch, &q->internal, skb);
>                 goto out;
> @@ -716,7 +716,7 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
>                 }
>                 prefetch(&skb->end);
>                 fq_dequeue_skb(sch, f, skb);
> -               if ((s64)(now - time_next_packet - q->ce_threshold) > 0) {
> +               if (unlikely((s64)(now - time_next_packet - q->ce_threshold) > 0)) {
>                         INET_ECN_set_ce(skb);
>                         q->stat_ce_mark++;
>                 }
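
For the archive: the hints being tweaked are the kernel's
likely()/unlikely() annotations, which (modulo instrumentation) are thin
wrappers around GCC's __builtin_expect() in include/linux/compiler.h.
A minimal userspace sketch of the idea follows; the macro names mirror
the kernel's, everything else is illustrative:

#include <stdio.h>

#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

static int slow_path(void)
{
	return -1;
}

/* The compiler keeps the expected outcome on the fall-through path and
 * tends to move the unexpected one out of line; the resulting code
 * size and layout also feed its inlining cost model, which is why
 * flipping a hint can change whether a callee gets (auto)inlined. */
static int process(int fast)
{
	if (likely(fast))
		return 1;	/* expected: straight-line hot path */
	return slow_path();	/* unexpected: placed off the hot path */
}

int main(void)
{
	printf("%d %d\n", process(1), process(0));
	return 0;
}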

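The second hunk also relies on the usual wraparound-safe time test:
subtract in u64, then compare the s64 cast against zero. A standalone
sketch of the same shape, with illustrative names rather than sch_fq's
(the deadline here stands in for time_next_packet + ce_threshold):

#include <stdint.h>
#include <stdio.h>

/* Same shape as (s64)(now - time_next_packet - q->ce_threshold) > 0:
 * the signed view of the unsigned difference tells us which side of
 * the deadline "now" sits on, even across counter wrap. */
static int after_deadline(uint64_t now, uint64_t deadline)
{
	return (int64_t)(now - deadline) > 0;
}

int main(void)
{
	uint64_t deadline = UINT64_MAX - 5;	/* just before wrap */
	uint64_t now = deadline + 10;		/* wraps around to 4 */

	/* A naive now > deadline gets this wrong; the signed
	 * difference does not. */
	printf("naive=%d signed=%d\n",
	       now > deadline ? 1 : 0, after_deadline(now, deadline));
	return 0;
}
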
While it looks rational, you didn't mention any numbers.
I am curious: is it always _guaranteed_ that inlining improves performance?

Reviewed-by: Jamal Hadi Salim <jhs@...atatu.com>

cheers,
jamal
