Message-ID: <CAL+tcoCC8yVS9R9bky4XatgJmX4bzrV8Pio6+jwyMSmKo0UiSw@mail.gmail.com>
Date: Thu, 14 Aug 2025 08:33:20 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, horms@...nel.org, andrew+netdev@...n.ch,
anthony.l.nguyen@...el.com, przemyslaw.kitszel@...el.com, sdf@...ichev.me,
larysa.zaremba@...el.com, intel-wired-lan@...ts.osuosl.org,
netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx
interfaces to increase performance
On Thu, Aug 14, 2025 at 2:09 AM Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> On Wed, Aug 13, 2025 at 08:34:52AM +0800, Jason Xing wrote:
> > Hi Maciej,
> >
> > On Tue, Aug 12, 2025 at 11:42 PM Maciej Fijalkowski
> > <maciej.fijalkowski@...el.com> wrote:
> > >
> > > On Tue, Aug 12, 2025 at 03:55:04PM +0800, Jason Xing wrote:
> > > > From: Jason Xing <kernelxing@...cent.com>
> > > >
> > >
> > > Hi Jason,
> > >
> > > patches should be targetted at iwl-next as these are improvements, not
> > > fixes.
> >
> > Oh, right.
> >
> > >
> > > > Like what i40e driver initially did in commit 3106c580fb7cf
> > > > ("i40e: Use batched xsk Tx interfaces to increase performance"), use
> > > > the batched xsk feature to transmit packets.
> > > >
> > > > Signed-off-by: Jason Xing <kernelxing@...cent.com>
> > > > ---
> > > > In this version, I still chose to use the current implementation. Last
> > > > time, at first glance, I agreed that 'i' was useless, but it is not.
> > > > https://lore.kernel.org/intel-wired-lan/CAL+tcoADu-ZZewsZzGDaL7NugxFTWO_Q+7WsLHs3Mx-XHjJnyg@mail.gmail.com/
> > >
> > > dare to share the performance improvement (if any, in the current form)?
> >
> > I tested the whole series; sorry, no actual improvement could be seen
> > with xdpsock. Not even with the first series. :(
>
> So if i were you i would hesitate with posting it :P in the past batching
(I'm definitely not an Intel NIC expert, but I'm still willing to write
some code on the driver side. I need to study more.)
> approaches always yielded performance gain.
No, I still assume no better numbers can be seen with xdpsock even
with further tweaks. In particular, yesterday I saw that zerocopy mode
already hits 70% of full line rate, which in all likelihood means that
is the bottleneck. That is also the answer to what you questioned in
that patch[0]. For most advanced NICs, zerocopy mode must be much
better than copy mode, except for ixgbe, which somehow already stands
at the maximum throughput of af_xdp.
[0]: https://lore.kernel.org/all/CAL+tcoAst1xs=xCLykUoj1=Vj-0LtVyK-qrcDyoy4mQrHgW1kg@mail.gmail.com/
>
> >
> > >
> > > also you have not mentioned in v1->v2 that you dropped the setting of
> > > xdp_zc_max_segs, which is a step in a correct path.
In v1, you asked me to give up the multi-buffer function[1], so I did.
Yesterday, I wrongly corrected myself and convinced myself that
xdp_zc_max_segs is related to the batch process.
IIUC, do you have these multi-buffer patches locally, or have you
decided to implement them yourself?
[1]: https://lore.kernel.org/intel-wired-lan/aINVrP8vrxIkxhZr@boxer/
> >
> > Oops, I blindly dropped the last patch without carefully checking it.
> > Thanks for showing me.
> >
> > I set it to four for ixgbe. I'm not sure whether there is any theory
> > behind setting this value?
>
> you're confusing two different things. xdp_zc_max_segs is related to
> multi-buffer support in xsk zc whereas you're referring to loop unrolling
> counter.
No, actually I'm confused about the reasoning behind the value of xdp_zc_max_segs.
>
> >
> > >
> > > > ---
> > > > drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 106 +++++++++++++------
> > > > 1 file changed, 72 insertions(+), 34 deletions(-)
> > > >
> > > > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > > index f3d3f5c1cdc7..9fe2c4bf8bc5 100644
> > > > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > > @@ -2,12 +2,15 @@
> > > > /* Copyright(c) 2018 Intel Corporation. */
> > > >
> > > > #include <linux/bpf_trace.h>
> > > > +#include <linux/unroll.h>
> > > > #include <net/xdp_sock_drv.h>
> > > > #include <net/xdp.h>
> > > >
> > > > #include "ixgbe.h"
> > > > #include "ixgbe_txrx_common.h"
> > > >
> > > > +#define PKTS_PER_BATCH 4
> > > > +
> > > > struct xsk_buff_pool *ixgbe_xsk_pool(struct ixgbe_adapter *adapter,
> > > > struct ixgbe_ring *ring)
> > > > {
> > > > @@ -388,58 +391,93 @@ void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring)
> > > > }
> > > > }
> > > >
> > > > -static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> > > > +static void ixgbe_set_rs_bit(struct ixgbe_ring *xdp_ring)
> > > > +{
> > > > + u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
> > > > + union ixgbe_adv_tx_desc *tx_desc;
> > > > +
> > > > + tx_desc = IXGBE_TX_DESC(xdp_ring, ntu);
> > > > + tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD_RS);
> > >
> > > you have not addressed the descriptor cleaning path which makes this
> > > change rather pointless or even the driver behavior is broken.
> >
> > Are you referring to 'while (ntc != ntu) {}' in
> > ixgbe_clean_xdp_tx_irq()? But I see no difference between that part
> > and the similar part 'for (i = 0; i < completed_frames; i++) {}' in
> > i40e_clean_xdp_tx_irq()
>
> if (likely(!tx_ring->xdp_tx_active)) {
> xsk_frames = completed_frames;
> goto skip;
> }
Thanks for the pointer. I will append a patch similar to this one[2]
to the series. It's exactly the change that helps ramp up the speed.
[2]:
commit 5574ff7b7b3d864556173bf822796593451a6b8c
Author: Magnus Karlsson <magnus.karlsson@...el.com>
Date: Tue Jun 23 11:44:16 2020 +0200
i40e: optimize AF_XDP Tx completion path
Improve the performance of the AF_XDP zero-copy Tx completion
path. When there are no XDP buffers being sent using XDP_TX or
XDP_REDIRECT, we do not have go through the SW ring to clean up any
entries since the AF_XDP path does not use these. In these cases, just
fast forward the next-to-use counter and skip going through the SW
ring. The limit on the maximum number of entries to complete is also
removed since the algorithm is now O(1). To simplify the code path, the
maximum number of entries to complete for the XDP path is therefore
also increased from 256 to 512 (the default number of Tx HW
descriptors). This should be fine since the completion in the XDP path
is faster than in the SKB path that has 256 as the maximum number.
> >
> > >
> > > point of such change is to limit the interrupts raised by HW once it is
> > > done with sending the descriptor. you still walk the descs one-by-one in
> > > ixgbe_clean_xdp_tx_irq().
> >
> > Sorry, I must be missing something important. In my view only at the
> > end of ixgbe_xmit_zc(), ixgbe always kicks the hardware through
> > ixgbe_xdp_ring_update_tail() before/after this series.
> >
> > As to 'one-by-one', I see i40e also handles like that in 'for (i = 0;
> > i < completed_frames; i++)' in i40e_clean_xdp_tx_irq(). Ice does this
> > in ice_clean_xdp_irq_zc()?
>
> i40e does not look up DD bit from descriptor. plus this loop you refer to
> is taken only when (see above) xdp_tx_active is not 0 (meaning that there
> have been some XDP_TX action on queue and we have to clean the buffer in a
> different way).
I think I now know what to do next: implement the xdp_tx_active handling.
>
> in general i would advise to look at ice as i40e writes back the tx ring
> head which is used in cleaning logic. ice does not have this feature,
> neither does ixgbe.
Thanks. I will also dig into the datasheets, which are all I have to go on.
Thanks,
Jason