Message-ID: <CAL+tcoCCM+m6eJ1VNoeF2UMdFOhMjJ1z2FVUoMJk=js++hk0RQ@mail.gmail.com>
Date: Sun, 29 Jun 2025 18:43:05 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com, jonathan.lemon@...il.com, sdf@...ichev.me,
ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com, joe@...a.to, willemdebruijn.kernel@...il.com
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v6] net: xsk: introduce XDP_MAX_TX_BUDGET set/getsockopt
On Sun, Jun 29, 2025 at 10:51 AM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> On Fri, Jun 27, 2025 at 7:01 PM Jason Xing <kerneljasonxing@...il.com> wrote:
> >
> > From: Jason Xing <kernelxing@...cent.com>
> >
> > This patch adds a setsockopt method that lets applications adjust how
> > many descs can be handled at most in one send syscall. It mitigates the
> > situation where the default value (32) is too small and thus leads to a
> > higher frequency of send syscalls.
> >
> > Given the variety and complexity of applications, there is no single
> > ideal value that fits all cases, so 32 is kept as the default value as
> > before.
> >
> > The patch does the following things:
> > - Add XDP_MAX_TX_BUDGET socket option.
> > - Convert TX_BATCH_SIZE to tx_budget_spent.
> > - Set tx_budget_spent to 32 by default in the initialization phase as a
> > per-socket granular control. 32 is also the min value for
> > tx_budget_spent.
> > - Set the range of tx_budget_spent as [32, xs->tx->nentries].
> >
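For illustration, a minimal user-space sketch of driving the new option
could look like the following. It assumes the option value is a plain
int and that the kernel keeps the value within the range above, so
treat it as a sketch rather than the exact API contract (error handling
trimmed):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>

static int set_tx_budget(int xsk_fd, int budget)
{
	socklen_t len = sizeof(budget);

	/* Raise the per-socket cap on descs handled in one send syscall. */
	if (setsockopt(xsk_fd, SOL_XDP, XDP_MAX_TX_BUDGET, &budget, len))
		return -1;

	/* Read the value back; the patch keeps it within [32, tx nentries]. */
	if (getsockopt(xsk_fd, SOL_XDP, XDP_MAX_TX_BUDGET, &budget, &len))
		return -1;

	printf("effective tx budget: %d\n", budget);
	return 0;
}
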
> > The idea comes from real workloads in production. We use a user-level
> > stack with xsk support to accelerate sending packets and minimize the
> > number of syscalls. When packets are aggregated, it's not hard to hit
> > the upper bound (namely, 32). The moment the user-space stack sees the
> > -EAGAIN error returned by sendto(), it loops and tries again until all
> > the expected descs from the tx ring have been sent out to the driver.
> > Enlarging the XDP_MAX_TX_BUDGET value reduces the frequency of sendto()
> > calls and raises throughput/PPS.
> >
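(The retry loop described above looks roughly like this; it is only a
simplified sketch, not the actual code of our stack:)

#include <errno.h>
#include <sys/socket.h>

static void kick_tx(int xsk_fd)
{
	/*
	 * The descs are already queued on the TX ring; sendto() only kicks
	 * the kernel to drive them out to the driver.
	 */
	for (;;) {
		if (sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0) >= 0)
			break;
		if (errno == EAGAIN || errno == EBUSY || errno == ENOBUFS)
			continue;	/* budget hit or driver busy, try again */
		break;			/* other errors: give up here */
	}
}
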
> > Here is what I did in production, along with some numbers:
> > For one application I looked at recently, I suggested using 128 as
> > max_tx_budget because I saw two limitations without changing any
> > default configuration: 1) XDP_MAX_TX_BUDGET, 2) the socket sndbuf,
> > which is 212992 as decided by net.core.wmem_default. As to
> > XDP_MAX_TX_BUDGET, I counted how many descs are transmitted to the
> > driver in one sendto() call based on the [1] patch and then calculated
> > the probability of hitting the upper bound. I finally chose 128 as a
> > suitable value because 1) it covers most of the cases, and 2) a higher
> > number would not bring further evident gains. After tweaking the
> > parameters, a stable improvement of around 4% for both PPS and
> > throughput was observed, along with lower resource consumption as
> > measured by strace -c -p xxx:
> > 1) %time decreased by 7.8%
> > 2) the error counter decreased from 18367 to 572
>
> More interesting numbers arrived as I ran some benchmarks from
> xdp-project/bpf-examples/AF_XDP-example/ in my VM.
>
> Running "sudo taskset -c 2 ./xdpsock -i eth0 -q 1 -l -N -t -b 256"
>
> Using the default value of 32 as the max budget:
> sock0@...0:1 txonly xdp-drv
> pps pkts 1.01
> rx 0 0
> tx 48,574 49,152
>
> Enlarging the value to 256:
> sock0@...0:1 txonly xdp-drv
> pps pkts 1.00
> rx 0 0
> tx 148,277 148,736
>
> Enlarging the value to 512:
> sock0@...0:1 txonly xdp-drv
> pps pkts 1.00
> rx 0 0
> tx 226,306 227,072
>
> PPS goes up by around 365% (with the max budget set to 512), which is
> an incredible number :)
Weird thing: I purchased another VM and didn't manage to see such a
huge improvement... The good news is that I still have the machine on
which the result is reproducible, and I'm still digging into it. So
please ignore this noise for now :|
Thanks,
Jason