Message-ID: <CANn89iLidq+WTYkg2-U6g8tK5W=squKoQcYECc=RjF_h7-g-wg@mail.gmail.com>
Date: Fri, 24 Oct 2025 07:30:09 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>, 
	Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com, 
	Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next] net: optimize enqueue_to_backlog() for the fast path

On Fri, Oct 24, 2025 at 7:03 AM Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
>
> Eric Dumazet wrote:
> > Add likely() and unlikely() annotations for the common cases:
> >
> > Device is running.
> > Queue is not full.
> > Queue is less than half capacity.
> >
> > Add max_backlog parameter to skb_flow_limit() to avoid
> > a second READ_ONCE(net_hotdata.max_backlog).
> >
> > skb_flow_limit() does not need the backlog_lock protection,
> > and can be called before we acquire the lock, for even better
> > resistance to attacks.
> >
> > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > Cc: Willem de Bruijn <willemb@...gle.com>
> > ---
> >  net/core/dev.c | 18 ++++++++++--------
> >  1 file changed, 10 insertions(+), 8 deletions(-)
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 378c2d010faf251ffd874ebf0cc3dd6968eee447..d32f0b0c03bbd069d3651f5a6b772c8029baf96c 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -5249,14 +5249,15 @@ void kick_defer_list_purge(unsigned int cpu)
> >  int netdev_flow_limit_table_len __read_mostly = (1 << 12);
> >  #endif
> >
> > -static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen)
> > +static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen,
> > +                        int max_backlog)
> >  {
> >  #ifdef CONFIG_NET_FLOW_LIMIT
> > -     struct sd_flow_limit *fl;
> > -     struct softnet_data *sd;
> >       unsigned int old_flow, new_flow;
> > +     const struct softnet_data *sd;
> > +     struct sd_flow_limit *fl;
> >
> > -     if (qlen < (READ_ONCE(net_hotdata.max_backlog) >> 1))
> > +     if (likely(qlen < (max_backlog >> 1)))
> >               return false;
> >
> >       sd = this_cpu_ptr(&softnet_data);
>
> I assume sd is warm here. Otherwise we could even move skb_flow_limit
> behind a static_branch, given how rarely it is likely to be used.

this_cpu_ptr(&ANY_VAR) only loads the very hot this_cpu_off variable.
In modern kernels it is declared as

DEFINE_PER_CPU_CACHE_HOT(unsigned long, this_cpu_off);

The rest is encoded in the offsets used in the generated code.
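
For illustration, on x86-64 the whole lookup compiles down to roughly
the following (a sketch, not the exact generated code; the 0x1234
offset stands in for softnet_data's per-cpu offset):

	movq	%gs:this_cpu_off, %rax	# cache-hot per-cpu base
	leaq	0x1234(%rax), %rax	# + compile-time constant offset

so only this_cpu_off itself is actually read from memory.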

>
> > @@ -5301,19 +5302,19 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
> >       u32 tail;
> >
> >       reason = SKB_DROP_REASON_DEV_READY;
> > -     if (!netif_running(skb->dev))
> > +     if (unlikely(!netif_running(skb->dev)))
> >               goto bad_dev;
>
> Isn't a branch without an else usually predicted as unlikely anyway?

I am not sure this is a hardcoded rule that all compilers stick to.
Do you have a reference?
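
For reference, with branch profiling off these annotations are just
__builtin_expect() hints (include/linux/compiler.h):

# define likely(x)	__builtin_expect(!!(x), 1)
# define unlikely(x)	__builtin_expect(!!(x), 0)

Without the hint, how a forward branch with no else gets laid out is
left to each compiler's own heuristics, which is exactly what the
explicit annotation avoids relying on.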

>
> And that is ignoring both FDO and actual branch prediction hardware
> improving on the simple compiler heuristic.

Let's not assume FDO is always used, and close the gap.
This will allow us to iterate faster.
FDO brings its own class of problems...

>
> No immediate concerns. Just want to avoid setting a precedent for others
> to sprinkle code with likely/unlikely with abandon, as is sometimes
> seen.

Sure.

I have not included a change for the apparently _very_ expensive

if (!__test_and_set_bit(NAPI_STATE_SCHED,
			&sd->backlog.state))

which shows up as

btsq   $0x0,0x160(%r13)

I tried to test the bit first and only set it if needed, but got no
improvement, for some reason.
(This was after the other patch making sure to group the dirtied
fields in a single cache line.)
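
The test-then-set variant was along these lines (a sketch, not the
exact change tried; the napi_schedule_rps() call is the one already
present in enqueue_to_backlog()):

	if (!test_bit(NAPI_STATE_SCHED, &sd->backlog.state) &&
	    !__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
		napi_schedule_rps(sd);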
