Message-ID: <CAHS8izPCjjfgfUWVuANcCLs6DLCefAyQL4OKT9g0YQTt2jraKA@mail.gmail.com>
Date: Thu, 26 Jun 2025 09:30:32 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: patchwork-bot+netdevbpf@...nel.org, netdev@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org, 
	hawk@...nel.org, davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com, 
	horms@...nel.org, shuah@...nel.org, ilias.apalodimas@...aro.org, toke@...e.dk
Subject: Re: [PATCH net-next v5] page_pool: import Jesper's page_pool benchmark

On Thu, Jun 26, 2025 at 8:23 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Wed, 25 Jun 2025 17:22:56 -0700 Mina Almasry wrote:
> > What I'm hoping to do is:
> >
> > 1. Have nipa run the benchmark always (or at least on patches that
> > touch pp code, if that's possible), and always succeed.
> > 2. The pp reviewers can always check the contest results to manually
> > see if there is a regression. That's still great because it saves us
> > the time of cherry-picking the series and running the tests ourselves
> > (or asking submitters to do that).
> > 3. If we notice that the results are stable between runs, then we can
> > change the test to actually fail/warn when it detects a regression
> > (i.e. if the fast path takes more than # instructions, fail).
>
> That's fine. I don't think putting the data on a graph would be much
> work, and clicking results out of old runs will be a PITA. Just a
> little parsing in the runner to propagate it into JSON, and a fairly
> trivial bit of charts.js to fetch the runs and render the UI.
>
> > 4. If we notice that the results have too much noise, then we can
> > improve the now-merged benchmark to make it more consistent.
> >
> > FWIW, when I run the benchmark, I get very repeatable results across
> > runs, especially when measuring the fast path, but nipa's mileage may
> > vary.
>
> 100% on board. But someone with Meta credentials needs to add a runner
> and babysit it; I have enough CI wrangling as is.
>

Of course! If someone with the credentials volunteers, that would be
great; if not, no big deal really. We can always get the runs manually
in the meantime. The volume of pp patches isn't that high anyway.
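
For (3), what I have in mind is roughly the sketch below. This is only a
sketch: the dmesg line format, the Type:/fast_path labels, and the
20-cycle limit are assumptions that would need to be matched to the
module's actual output and tuned against whatever noise nipa sees in
practice. It could also double as the bit of parsing needed to propagate
results into JSON.

#!/usr/bin/env python3
# Sketch only (not part of the patch): parse the benchmark's kernel log,
# emit a JSON blob the runner could propagate, and fail if the fast path
# exceeds a threshold. The log format and the "fast_path" label are
# assumptions -- match them to the module's real output.
import json
import re
import subprocess
import sys

FAST_PATH_MAX_CYCLES = 20  # hypothetical limit; needs per-runner tuning

def main():
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    # Assumed line shape: "... Type:<name> ... Per elem: 14 cycles(tsc) ..."
    results = {name: int(cycles) for name, cycles in
               re.findall(r"Type:(\S+).*?Per elem:\s*(\d+)\s*cycles", log)}
    print(json.dumps(results))

    fast = [c for n, c in results.items() if "fast_path" in n]
    if not fast:
        return 4  # KSFT_SKIP: benchmark output not found
    return 0 if min(fast) <= FAST_PATH_MAX_CYCLES else 1

if __name__ == "__main__":
    sys.exit(main())

Returning the kselftest skip code when the output isn't there keeps the
check from blocking series that don't load the module at all.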

> Or we wait a couple of months until we migrate to a more public setup.

Yes, I'll take a look when/if that happens (I'll watch out for an announcement).

Thanks!

-- 
Thanks,
Mina
