Message-ID: <CANn89iKY7uMX41aLZA6cFXbjR49Z+WCSd7DgZDkTqXxfeqnXmg@mail.gmail.com>
Date: Fri, 7 Nov 2025 07:46:03 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Jamal Hadi Salim <jhs@...atatu.com>, Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>, Kuniyuki Iwashima <kuniyu@...gle.com>,
Willem de Bruijn <willemb@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH v1 net-next 5/5] net: dev_queue_xmit() llist adoption
On Fri, Nov 7, 2025 at 7:37 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Fri, Nov 7, 2025 at 7:28 AM Toke Høiland-Jørgensen <toke@...hat.com> wrote:
> >
> > Eric Dumazet <edumazet@...gle.com> writes:
> >
> > > Remove the busylock spinlock and use a lockless list (llist)
> > > to reduce spinlock contention to a minimum.
> > >
> > > The idea is that only one cpu might spin on the qdisc spinlock,
> > > while the others simply add their skb to the llist.
> > >
> > > After this patch, we get a 300% improvement on heavy TX workloads:
> > > - Sending twice the number of packets per second.
> > > - While consuming 50% fewer cycles.
> > >
> > > Note that this also allows submitting batches to various
> > > qdisc->enqueue() methods in the future.
> > >
> > > Tested:
> > >
> > > - Dual Intel(R) Xeon(R) 6985P-C (480 hyper threads).
> > > - 100Gbit NIC, 30 TX queues with FQ packet scheduler.
> > > - echo 64 >/sys/kernel/slab/skbuff_small_head/cpu_partial (avoid contention in mm)
> > > - 240 concurrent "netperf -t UDP_STREAM -- -m 120 -n"
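
In concrete terms, the handoff the changelog describes boils down to
something like the sketch below (illustrative only: my_enqueue() and
flush_batch() are made-up names, and q->defer_list stands for whatever
llist_head the patch actually adds):

	#include <linux/llist.h>
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>
	#include <net/sch_generic.h>

	/* Hypothetical helper: walks the detached batch, enqueues each
	 * skb into the qdisc, then runs it. Stands in for the real
	 * drain loop quoted later in this thread.
	 */
	static void flush_batch(struct llist_node *ll_list, struct Qdisc *q);

	static int my_enqueue(struct sk_buff *skb, struct Qdisc *q)
	{
		/* Lock-free push: llist_add() returns true only for the
		 * cpu that found the list empty. That cpu becomes the
		 * single one spinning on the qdisc lock; everyone else
		 * returns immediately.
		 */
		if (!llist_add(&skb->ll_node, &q->defer_list))
			return NET_XMIT_SUCCESS;

		spin_lock(qdisc_lock(q));
		/* Atomically take ownership of the whole batch; other
		 * cpus keep appending to a fresh, empty list meanwhile.
		 */
		flush_batch(llist_del_all(&q->defer_list), q);
		spin_unlock(qdisc_lock(q));
		return NET_XMIT_SUCCESS;
	}
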
> >
> > Hi Eric
> >
> > While testing this with sch_cake (to get a new baseline for the mq_cake
> > patches as Jamal suggested), I found that this patch completely destroys
> > the performance of cake in particular.
> >
> > I ran a small UDP test (64-byte packets across 16 flows through
> > xdp-trafficgen, offered load ~5 Mpps) with a single cake instance
> > installed as the root qdisc on the interface.
> >
> > With a stock Fedora (6.17.7) kernel, this gets me around 630 Kpps across
> > 8 queues (on an E810-C, ice driver):
> >
> > Ethtool(ice0p1 ) stat: 40321218 ( 40,321,218) <= tx_bytes /sec
> > Ethtool(ice0p1 ) stat: 42841424 ( 42,841,424) <= tx_bytes.nic /sec
> > Ethtool(ice0p1 ) stat: 5248505 ( 5,248,505) <= tx_queue_0_bytes /sec
> > Ethtool(ice0p1 ) stat: 82008 ( 82,008) <= tx_queue_0_packets /sec
> > Ethtool(ice0p1 ) stat: 3425984 ( 3,425,984) <= tx_queue_1_bytes /sec
> > Ethtool(ice0p1 ) stat: 53531 ( 53,531) <= tx_queue_1_packets /sec
> > Ethtool(ice0p1 ) stat: 5277496 ( 5,277,496) <= tx_queue_2_bytes /sec
> > Ethtool(ice0p1 ) stat: 82461 ( 82,461) <= tx_queue_2_packets /sec
> > Ethtool(ice0p1 ) stat: 5285736 ( 5,285,736) <= tx_queue_3_bytes /sec
> > Ethtool(ice0p1 ) stat: 82590 ( 82,590) <= tx_queue_3_packets /sec
> > Ethtool(ice0p1 ) stat: 5280731 ( 5,280,731) <= tx_queue_4_bytes /sec
> > Ethtool(ice0p1 ) stat: 82511 ( 82,511) <= tx_queue_4_packets /sec
> > Ethtool(ice0p1 ) stat: 5275665 ( 5,275,665) <= tx_queue_5_bytes /sec
> > Ethtool(ice0p1 ) stat: 82432 ( 82,432) <= tx_queue_5_packets /sec
> > Ethtool(ice0p1 ) stat: 5276398 ( 5,276,398) <= tx_queue_6_bytes /sec
> > Ethtool(ice0p1 ) stat: 82444 ( 82,444) <= tx_queue_6_packets /sec
> > Ethtool(ice0p1 ) stat: 5250946 ( 5,250,946) <= tx_queue_7_bytes /sec
> > Ethtool(ice0p1 ) stat: 82046 ( 82,046) <= tx_queue_7_packets /sec
> > Ethtool(ice0p1 ) stat: 1 ( 1) <= tx_restart /sec
> > Ethtool(ice0p1 ) stat: 630023 ( 630,023) <= tx_size_127.nic /sec
> > Ethtool(ice0p1 ) stat: 630019 ( 630,019) <= tx_unicast /sec
> > Ethtool(ice0p1 ) stat: 630020 ( 630,020) <= tx_unicast.nic /sec
> >
> > However, running the same test on a net-next kernel, performance drops
> > to around 10 Kpps(!):
> >
> > Ethtool(ice0p1 ) stat: 679003 ( 679,003) <= tx_bytes /sec
> > Ethtool(ice0p1 ) stat: 721440 ( 721,440) <= tx_bytes.nic /sec
> > Ethtool(ice0p1 ) stat: 123539 ( 123,539) <= tx_queue_0_bytes /sec
> > Ethtool(ice0p1 ) stat: 1930 ( 1,930) <= tx_queue_0_packets /sec
> > Ethtool(ice0p1 ) stat: 1776 ( 1,776) <= tx_queue_1_bytes /sec
> > Ethtool(ice0p1 ) stat: 28 ( 28) <= tx_queue_1_packets /sec
> > Ethtool(ice0p1 ) stat: 1837 ( 1,837) <= tx_queue_2_bytes /sec
> > Ethtool(ice0p1 ) stat: 29 ( 29) <= tx_queue_2_packets /sec
> > Ethtool(ice0p1 ) stat: 1776 ( 1,776) <= tx_queue_3_bytes /sec
> > Ethtool(ice0p1 ) stat: 28 ( 28) <= tx_queue_3_packets /sec
> > Ethtool(ice0p1 ) stat: 1654 ( 1,654) <= tx_queue_4_bytes /sec
> > Ethtool(ice0p1 ) stat: 26 ( 26) <= tx_queue_4_packets /sec
> > Ethtool(ice0p1 ) stat: 222026 ( 222,026) <= tx_queue_5_bytes /sec
> > Ethtool(ice0p1 ) stat: 3469 ( 3,469) <= tx_queue_5_packets /sec
> > Ethtool(ice0p1 ) stat: 183072 ( 183,072) <= tx_queue_6_bytes /sec
> > Ethtool(ice0p1 ) stat: 2861 ( 2,861) <= tx_queue_6_packets /sec
> > Ethtool(ice0p1 ) stat: 143322 ( 143,322) <= tx_queue_7_bytes /sec
> > Ethtool(ice0p1 ) stat: 2239 ( 2,239) <= tx_queue_7_packets /sec
> > Ethtool(ice0p1 ) stat: 10609 ( 10,609) <= tx_size_127.nic /sec
> > Ethtool(ice0p1 ) stat: 10609 ( 10,609) <= tx_unicast /sec
> > Ethtool(ice0p1 ) stat: 10609 ( 10,609) <= tx_unicast.nic /sec
> >
> > Reverting commit 100dfa74cad9 ("net: dev_queue_xmit() llist adoption")
> > (and the follow-on f8a55d5e71e6 ("net: add a fast path in
> > __netif_schedule()"), but that alone makes no difference) gets me back
> > to the previous 630-650 Kpps range.
> >
> > I couldn't find any other qdisc that suffers in the same way (tried
> > fq_codel, sfq and netem as single root qdiscs), so this seems to be some
> > specific interaction between the llist implementation and sch_cake. Any
> > idea what could be causing this?
>
> I would take a look at the full "tc -s -d qdisc" output and see if
> anything interesting shows up (requeues?)
>
> Also check whether you have drops (perf record -a -e skb:kfree_skb)
>
> You are sharing one qdisc across 8 queues?
I also assume you are running net-next, because the final patch was a bit different:
	int count = 0;

	/* Drain the batch grabbed from the llist while holding the
	 * qdisc spinlock.
	 */
	llist_for_each_entry_safe(skb, next, ll_list, ll_node) {
		prefetch(next);
		skb_mark_not_on_list(skb);
		rc = dev_qdisc_enqueue(skb, q, &to_free, txq);
		count++;
	}
	qdisc_run(q);
	/* rc describes the caller's own skb only if it was alone in
	 * the batch.
	 */
	if (count != 1)
		rc = NET_XMIT_SUCCESS;
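
The count != 1 part is there because the other cpus already returned
NET_XMIT_SUCCESS right after their llist_add(): once the flushing cpu
has drained more than its own skb, the rc from the last
dev_qdisc_enqueue() does not necessarily describe the caller's packet,
so NET_XMIT_SUCCESS is reported instead.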