Message-ID: <CAL+tcoDuBkd=pMR+mrUJgLpCRdh1cQcPBEH9rvnMJtXU242MHQ@mail.gmail.com>
Date: Thu, 3 Jul 2025 21:11:59 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: Paolo Abeni <pabeni@...hat.com>, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, bjorn@...nel.org, magnus.karlsson@...el.com,
jonathan.lemon@...il.com, sdf@...ichev.me, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com, joe@...a.to,
willemdebruijn.kernel@...il.com, bpf@...r.kernel.org, netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v6] net: xsk: introduce XDP_MAX_TX_BUDGET set/getsockopt
On Thu, Jul 3, 2025 at 8:29 PM Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> On Thu, Jul 03, 2025 at 04:22:21PM +0800, Jason Xing wrote:
> > On Thu, Jul 3, 2025 at 4:15 PM Paolo Abeni <pabeni@...hat.com> wrote:
> > >
> > > On 6/27/25 1:01 PM, Jason Xing wrote:
> > > > From: Jason Xing <kernelxing@...cent.com>
> > > >
> > > > This patch adds a setsockopt interface that lets applications adjust
> > > > how many descs can be handled at most in one send syscall. It
> > > > mitigates the situation where the default value (32) is too small and
> > > > leads to a higher frequency of send syscalls.
> > > >
> > > > Given the diversity/complexity of applications, there is no single
> > > > ideal value fitting all cases. So keep 32 as the default value, as
> > > > before.
> > > >
> > > > The patch does the following things:
> > > > - Add XDP_MAX_TX_BUDGET socket option.
> > > > - Convert TX_BATCH_SIZE to tx_budget_spent.
> > > > - Set tx_budget_spent to 32 by default in the initialization phase as a
> > > > per-socket granular control. 32 is also the min value for
> > > > tx_budget_spent.
> > > > - Set the range of tx_budget_spent as [32, xs->tx->nentries].
> > > >
> > > > The idea behind this comes out of real workloads in production. We use
> > > > a user-level stack with xsk support to accelerate sending packets and
> > > > minimize syscalls. When packets are aggregated, it is not hard to hit
> > > > the upper bound (namely, 32). The moment the user-space stack sees the
> > > > -EAGAIN error number returned from sendto(), it loops and tries again
> > > > until all the expected descs from the tx ring are sent out to the
> > > > driver. Enlarging the XDP_MAX_TX_BUDGET value reduces the frequency of
> > > > sendto() calls and raises throughput/PPS.
> > > >
> > > > Here is what I did in production, along with some numbers:
> > > > For one application I saw lately, I suggested using 128 as
> > > > max_tx_budget because I saw two limitations without changing any
> > > > default configuration: 1) XDP_MAX_TX_BUDGET, 2) the socket sndbuf,
> > > > which is 212992 as decided by net.core.wmem_default. As to
> > > > XDP_MAX_TX_BUDGET, I counted how many descs are transmitted to the
> > > > driver per sendto() call based on patch [1] and then calculated the
> > > > probability of hitting the upper bound. I finally chose 128 as a
> > > > suitable value because 1) it covers most of the cases, and 2) a higher
> > > > number would not bring evident gains. After tuning the parameters, a
> > > > stable improvement of around 4% in both PPS and throughput, plus lower
> > > > resource consumption, was observed via strace -c -p xxx:
> > > > 1) %time decreased by 7.8%
> > > > 2) the error counter decreased from 18367 to 572
> > > >
> > > > [1]: https://lore.kernel.org/all/20250619093641.70700-1-kerneljasonxing@gmail.com/
> > > >
> > > > Signed-off-by: Jason Xing <kernelxing@...cent.com>
> > >
> > > LGTM, waiting a little more for an explicit ack from XDP maintainers.
> >
> > Thanks. No problem.
>
> Hey! I did review it. Jason, sorry, but I was under the impression that you
> still needed to sort out the performance results on your side, hence the silence.
Thanks for the review. My environment doesn't allow me to continue the
xdpsock experiment because of many limitations on the host side.
>
> >
> > >
> > > Side note: it could be useful to extend the xdp selftest to trigger the
> > > new code path.
> >
> > Roger that, sir. I will do it after this gets merged, maybe later this
> > month; I'm still studying the various tests these days :)
>
> IMHO there is nothing worth testing with this patch per se; it is rather a
> matter of performance.
>
> I would however like to ask you for a follow-up patch against xdpsock
> that adds support for this new setsockopt (once we accept it into the
> kernel).
That's the xdp-project on GitHub. I will finish it after this one is done.
Thanks,
Jason
>
> >
> > Thanks,
> > Jason