Message-ID: <CAHS8izO9=Q3W9zvq4Qtoi_NGTo6QShV7=rGOjxz3HiAB+6rZyw@mail.gmail.com>
Date: Wed, 25 Jun 2025 17:22:56 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: patchwork-bot+netdevbpf@...nel.org, netdev@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org, 
	hawk@...nel.org, davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com, 
	horms@...nel.org, shuah@...nel.org, ilias.apalodimas@...aro.org, toke@...e.dk
Subject: Re: [PATCH net-next v5] page_pool: import Jesper's page_pool benchmark

On Wed, Jun 25, 2025 at 5:03 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Wed, 25 Jun 2025 16:45:49 -0700 Mina Almasry wrote:
> > Thank you for merging this. Kind of a noob question: does this merge
> > mean that nipa will run this on newly submitted patches already? Or do
> > I/someone need to do something to enable that? I've been checking the
> > contest page for new patches like so:
> >
> > https://netdev.bots.linux.dev/contest.html?pw-n=0&branch=net-next-2025-06-25--21-00
> >
> > But I don't see this benchmark being run anywhere. I looked for docs
> > that already cover this but I couldn't find any.
>
> Right now, to add a new TARGET, one needs to have SSH access to the
> systems that run the tests :( The process of adding a runner is not
> automated. But this will probably need even more work because it's
> a performance test. We'd need some way of tracking numerical values
> and detecting regressions?
>

I actually did what you suggested earlier: I have the test report the
perf numbers but always succeed.
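
Roughly the shape I mean, as an illustrative Python sketch (the module
name, dmesg format, and regex below are hypothetical, not necessarily
what the merged selftest actually does):

#!/usr/bin/env python3
# Sketch of "report the perf numbers but succeed always". Module name,
# dmesg format, and regex are hypothetical placeholders.
import re
import subprocess
import sys

def run_bench():
    # Load the benchmark module (hypothetical name) and read its
    # results back from the kernel log.
    subprocess.run(["modprobe", "bench_page_pool_simple"], check=False)
    dmesg = subprocess.run(["dmesg"], capture_output=True,
                           text=True).stdout
    # Hypothetical result line: "bench: fast_path ... 13 cycles/packet"
    cycles = []
    for line in dmesg.splitlines():
        m = re.search(r"fast_path.*?([\d.]+)\s*cycles", line)
        if m:
            cycles.append(float(m.group(1)))
    return cycles

def main():
    for i, c in enumerate(run_bench(), 1):
        # Surface the measurement in the log, but never fail on it.
        print(f"# fast_path run {i}: {c} cycles/packet")
        print(f"ok {i} page_pool_bench_fast_path")
    return 0  # always succeed, for now

if __name__ == "__main__":
    sys.exit(main())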

What I'm hoping to do is:

1. Have nipa always run the benchmark (or at least on patches that
touch pp code, if that's possible), with the test always succeeding.
2. The pp reviewers can always check the contest results to manually
see if there is a regression. That's still great because it saves us
the time of cherry-picking the series and running the tests ourselves
(or asking submitters to do that).
3. If we notice that the results are stable between runs, then we can
change the test to actually fail/warn if it detects a regression (if
the fast path exceeds some number of instructions, fail; see the
sketch after this list).
4. If we notice that the results have too much noise, then we can
improve the now-merged benchmark to somehow make it more consistent.
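
Hypothetically, the later check from point 3 could be as simple as the
sketch below; the budget value and metric name are made up for
illustration, we'd pick real ones from the numbers we see in stable
contest runs:

# Hypothetical later check for point 3: compare the measured fast path
# against a budget and fail instead of always passing.
import sys

FAST_PATH_BUDGET_CYCLES = 20.0  # hypothetical regression threshold

def check_fast_path(cycles_per_packet: float) -> int:
    """Return a kselftest-style result: 0 = pass, non-zero = fail."""
    if cycles_per_packet > FAST_PATH_BUDGET_CYCLES:
        print(f"not ok 1 page_pool fast path regressed: "
              f"{cycles_per_packet} > {FAST_PATH_BUDGET_CYCLES} "
              f"cycles/packet")
        return 1
    print(f"ok 1 page_pool fast path within budget "
          f"({cycles_per_packet} cycles/packet)")
    return 0

if __name__ == "__main__":
    # Example: the value would come from the benchmark output parsed
    # as in the earlier sketch.
    sys.exit(check_fast_path(float(sys.argv[1])))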

FWIW, when I run the benchmark, I get very repeatable results across
runs, especially when measuring the fast path, but nipa's mileage may
vary.

-- 
Thanks,
Mina
