Message-ID: <Z-SAgPijHtVP6S3n@mini-arch>
Date: Wed, 26 Mar 2025 15:32:32 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: Samiullah Khawaja <skhawaja@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
"David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
almasrymina@...gle.com, willemb@...gle.com, jdamato@...tly.com,
mkarsten@...terloo.ca, netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next] Add xsk_rr an AF_XDP benchmark to measure
latency
On 03/26, Maciej Fijalkowski wrote:
> On Wed, Mar 26, 2025 at 02:12:17PM -0700, Samiullah Khawaja wrote:
> > On Mon, Mar 24, 2025 at 3:32 PM Stanislav Fomichev <stfomichev@...il.com> wrote:
> > >
> > > On 03/20, Samiullah Khawaja wrote:
> > > > Note: This is a benchmarking tool used for experiments in the upcoming
> > > > v4 of the NAPI threaded busypoll series. Not intended to be merged.
> > > >
> > > > xsk_rr is a benchmarking tool to measure latency using AF_XDP between
> > > > two nodes. The benchmark can be run with different arguments to simulate
> > > > traffic:
> > >
> > > We might want to have something like this, but later, once we have NIPA
> > > runners for vendor NICs. The test would have to live in
> > > tools/testing/selftests/drivers/net/hw, have a python executor to run
> > > it on host/peer and expose the data in some ingestible/trackable format
> > > (so we can mark it red/green depending on the range on the dashboard).
> > I agree. I can send another version of this for that directory later.
> > >
> > > But I might be wrong; having flaky perf tests (most of them are) might
> > > not be super valuable.
> >
>
> As you said, it's a benchmarking tool, so I feel like it should land in
> https://github.com/xdp-project/bpf-examples where we have xdpsock, which
> has previously been used for benchmarks.
I don't think it matters where the tools live. I'm more interested in
the general guidance on whether we want to continuously run those tools
on NIPA (on real HW) and track the numbers. Unfortunately it's gonna put
extra load on the maintainers in terms of tracking and acting on failures,
but it feels like it's a good direction in general.
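
For illustration, a very rough sketch of what the python executor could look
like, assuming the helpers under tools/testing/selftests/drivers/net/lib/py
(NetDrvEpEnv, cmd, bkg, ksft_run); the xsk_rr flags and output format below
are made up, since the patch doesn't spell them out:

# SPDX-License-Identifier: GPL-2.0
#
# Rough sketch only: the lib.py helpers are the ones from
# tools/testing/selftests/drivers/net/lib/py as I remember them, and every
# xsk_rr flag and its output format here is hypothetical -- the real
# arguments are whatever the benchmark ends up exposing.

from lib.py import ksft_run, ksft_exit, ksft_pr
from lib.py import NetDrvEpEnv, cmd, bkg


def test_xsk_rr_latency(cfg) -> None:
    # Run the responder on the peer and the requester locally; a real test
    # would also wait for the responder to be ready before sending.
    with bkg("./xsk_rr --server", host=cfg.remote):
        out = cmd(f"./xsk_rr --client {cfg.remote_addr}").stdout

    # Emit the number in a form the dashboard can track, so a run can be
    # marked red/green against an expected latency range (or compared to a
    # threshold directly in the test).
    p50_us = float(out.strip())
    ksft_pr(f"xsk_rr p50 latency: {p50_us} us")


def main() -> None:
    with NetDrvEpEnv(__file__) as cfg:
        ksft_run([test_xsk_rr_latency], args=(cfg,))
    ksft_exit()


if __name__ == "__main__":
    main()

Whether the numbers are stable enough across NIPA runners to gate on is a
separate question, but at least the output would be trackable.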