Message-ID: <20241009175447.GC167360@mit.edu>
Date: Wed, 9 Oct 2024 12:54:47 -0500
From: "Theodore Ts'o" <tytso@....edu>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
        linux-bcachefs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] bcachefs fixes for 6.12-rc2

On Wed, Oct 09, 2024 at 12:17:35AM -0400, Kent Overstreet wrote:
> How many steps are required, start to finish, to test a git branch and
> get the results?

See the quickstart doc.  The TL;DR is (1) do the git clone, (2) run
"make ; make install" (this just sets up the paths in the shell
scripts and copies them to your ~/bin directory, so it takes a second
or so), and then (3) run "install-kconfig ; kbuild ; kvm-xfstests
smoke" in your kernel tree.
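
Spelled out, that's something like the following (the repository URL
is from memory, and the kernel tree path is just an example;
substitute your own):

    git clone https://github.com/tytso/xfstests-bld.git
    cd xfstests-bld
    make ; make install     # sets up paths, copies scripts to ~/bin

    cd ~/linux              # your kernel tree
    install-kconfig ; kbuild ; kvm-xfstests smoke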

> But dashboards are important, as well. And the git log based dashboard
> I've got drastically reduces time spent manually bisecting.

gce-xfstests ltm -c ext4/1k generic/750 --repo ext4.git \
	     --bisect-bad dev --bisect-good origin
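
(Here "ltm" hands the job off to the Lightweight Test Manager, "-c
ext4/1k" selects the file system test configuration, "generic/750" is
the failing test, and --bisect-good/--bisect-bad name the endpoints
in the given repo; the bisection then runs unattended on the test
appliance.)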

With automated bisecting, I don't have to spend any of my personal
time; I just wait for the results to show up in my inbox, without
needing to refer to any dashboards.  :-)


> > In any case, that's why I haven't been interested in working with
> > your test infrastructure; I have my own, and in my opinion, my
> > approach is the better one to make available to the community, and so
> > when I have time to improve it, I'd much rather work on
> > {kvm,gce,android}-xfstests.
> 
> Well, my setup also isn't tied to xfstests, and it's fairly trivial to
> wrap all of our other (mm, block) tests.

Neither is mine; the {kvm,gce,qemu,android}-xfstests names have stuck
around for historical reasons.  I have blktests, ltp, stress-ng and
the Phoronix Test Suite wired up (although comparing against
historical baselines with PTS is a bit manual at the moment).
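
A blktests run goes through the same front end; if I'm remembering
the option name right, it's something like:

    kvm-xfstests --blktests block/001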

> But like I said before, I don't particularly care which one wins, as
> long as we're pushing forward with something.

I'd say that in the file system development community there has been
a huge amount of interest in testing, because we all share a general
consensus that testing is super important[1].  Most of us decided
that the "There Can Be Only One" model from the Highlander movie is
just not happening, because everyone's test infrastructure is
optimized for their particular workflow, just as there's a really
good reason why there are 75+ file systems in Linux, and a half-dozen
or so very popular general-purpose file systems.

And that's a good thing.

Cheers,

						- Ted

[1] https://docs.google.com/presentation/d/14MKWxzEDZ-JwNh0zNUvMbQa5ZyArZFdblTcF5fUa7Ss/edit#slide=id.g1635d98056_0_45

