Message-ID: <willemdebruijn.kernel.1ba874bc7bc@gmail.com>
Date: Fri, 24 Oct 2025 10:48:20 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Eric Dumazet <edumazet@...gle.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>,
Kuniyuki Iwashima <kuniyu@...gle.com>,
netdev@...r.kernel.org,
eric.dumazet@...il.com,
Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next] net: optimize enqueue_to_backlog() for the fast
path
Eric Dumazet wrote:
> On Fri, Oct 24, 2025 at 7:03 AM Willem de Bruijn
> <willemdebruijn.kernel@...il.com> wrote:
> >
> > Eric Dumazet wrote:
> > > Add likely() and unlikely() clauses for the common cases:
> > >
> > > Device is running.
> > > Queue is not full.
> > > Queue is less than half capacity.
> > >
> > > Add max_backlog parameter to skb_flow_limit() to avoid
> > > a second READ_ONCE(net_hotdata.max_backlog).
> > >
> > > skb_flow_limit() does not need the backlog_lock protection,
> > > and can be called before we acquire the lock, for even better
> > > resistance to attacks.
> > >
> > > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > > Cc: Willem de Bruijn <willemb@...gle.com>
Reviewed-by: Willem de Bruijn <willemb@...gle.com>
> > > ---
> > > net/core/dev.c | 18 ++++++++++--------
> > > 1 file changed, 10 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index 378c2d010faf251ffd874ebf0cc3dd6968eee447..d32f0b0c03bbd069d3651f5a6b772c8029baf96c 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -5249,14 +5249,15 @@ void kick_defer_list_purge(unsigned int cpu)
> > > int netdev_flow_limit_table_len __read_mostly = (1 << 12);
> > > #endif
> > >
> > > -static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen)
> > > +static bool skb_flow_limit(struct sk_buff *skb, unsigned int qlen,
> > > + int max_backlog)
> > > {
> > > #ifdef CONFIG_NET_FLOW_LIMIT
> > > - struct sd_flow_limit *fl;
> > > - struct softnet_data *sd;
> > > unsigned int old_flow, new_flow;
> > > + const struct softnet_data *sd;
> > > + struct sd_flow_limit *fl;
> > >
> > > - if (qlen < (READ_ONCE(net_hotdata.max_backlog) >> 1))
> > > + if (likely(qlen < (max_backlog >> 1)))
> > > return false;
> > >
> > > sd = this_cpu_ptr(&softnet_data);
> >
> > I assume sd is warm here. Else we could even move skb_flow_limit
> > behind a static_branch seeing how rarely it is likely used.
>
> this_cpu_ptr(&ANY_VAR) only loads very hot this_cpu_off. In modern
> kernels this is
>
> DEFINE_PER_CPU_CACHE_HOT(unsigned long, this_cpu_off);
>
> rest is in the offsets used in the code.
>
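I see. So on x86-64 this works out to roughly the following (a
conceptual sketch of the pointer math only, not the literal macro
expansion; __percpu/sparse annotations omitted):

    /* The per-CPU pointer is just the variable's address plus this
     * CPU's offset; the offset load hits the cache-hot, %gs-based
     * this_cpu_off, so computing sd itself should essentially never
     * miss. The softnet_data fields are only dereferenced once qlen
     * has already crossed half of max_backlog.
     */
    unsigned long off = raw_cpu_read(this_cpu_off);
    struct softnet_data *sd = (void *)((unsigned long)&softnet_data + off);
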
> >
> > > @@ -5301,19 +5302,19 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
> > > u32 tail;
> > >
> > > reason = SKB_DROP_REASON_DEV_READY;
> > > - if (!netif_running(skb->dev))
> > > + if (unlikely(!netif_running(skb->dev)))
> > > goto bad_dev;
> >
> > Isn't unlikely usually predicted for branches without an else?
>
> I am not sure this is a hardcoded rule that all compilers will stick with.
> Do you have a reference?
Actually I was thinking of CPU branch prediction when there is no
prior data.

According to the Intel® 64 and IA-32 Architectures Optimization
Reference Manual, Aug 2023, 3.4.1.2 Static Prediction:

  Branches that do not have a history in the BTB (see Section 3.4.1)
  are predicted using a static prediction algorithm:
  - Predict forward conditional branches to be NOT taken.
  [..]

But online threads mention that even on x86_64 the actual prediction
behavior, and the handling of explicit prediction hints, differs
between microarchitecture generations. And that's only Intel x86_64.
So not a universal guide, perhaps.
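
For completeness, the kernel's likely()/unlikely() are essentially
just __builtin_expect() wrappers (include/linux/compiler.h), so they
mainly steer block layout rather than emitting an explicit prediction
hint. A minimal standalone sketch, with enqueue_demo() as a made-up
stand-in for the netif_running() check above:

    #include <stdio.h>

    /* Same shape as the kernel macros in include/linux/compiler.h. */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    /* With the hint, gcc/clang move the drop path out of line, so the
     * common case is the fall-through of a forward conditional, i.e.
     * the statically predicted NOT-taken direction quoted above.
     */
    static int enqueue_demo(int dev_running)
    {
            if (unlikely(!dev_running))
                    return -1;      /* cold path */
            return 0;               /* hot fall-through */
    }

    int main(void)
    {
            printf("%d\n", enqueue_demo(1));
            return 0;
    }
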
> >
> > And that is ignoring both FDO and actual branch prediction hardware
> > improving on the simple compiler heuristic.
>
> Let's not assume FDO is always used, and close the gap.
> This will allow us to iterate faster.
> FDO brings its own class of problems...
>
> >
> > No immediate concerns. Just want to avoid setting a precedent for
> > others to sprinkle code with likely/unlikely with abandon. As is
> > sometimes seen.
>
> Sure.
>
> I have not included a change on the apparently _very_ expensive
>
> if (!__test_and_set_bit(NAPI_STATE_SCHED,
> &sd->backlog.state))
>
> btsq $0x0,0x160(%r13)
>
> I tried to test the bit, then set it if needed, but got no
> improvement, for some reason.
> (This was after the other patch making sure to group the dirtied
> fields in a single cache line.)
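
Interesting. Just to make sure I read that right, I assume the variant
tried was roughly this (my guess at the shape of it, not the actual
patch):

    /* Read the bit first so the common case, where the backlog NAPI
     * is already scheduled during a burst, avoids the read-modify-
     * write; only fall back to the (non-atomic, lock-protected) set
     * when the bit looks clear.
     */
    if (!test_bit(NAPI_STATE_SCHED, &sd->backlog.state) &&
        !__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state)) {
            /* schedule the backlog NAPI, as in the existing code */
    }

If so, the win would presumably only show up when the bit is typically
already set.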