lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Date:	Thu, 4 Feb 2010 22:55:19 -0500
From:	tytso@....edu
To:	Dmitry Monakhov <dmonakhov@...nvz.org>
Cc:	"Aneesh Kumar K. V" <aneesh.kumar@...ux.vnet.ibm.com>,
	linux-ext4@...r.kernel.org
Subject: Re: [PATCH 2/2] ext4: fix delalloc retry loop logic v2

On Fri, Feb 05, 2010 at 12:50:15AM +0300, Dmitry Monakhov wrote:
> BTW. I want to deploy automated testing suite in order to test some devel
> trees on daily basis in order to avoid obvious regressions (f.e. when i
> broke ext3+quota). Do you know a good one?

My general rule is that I won't push a patch set to Linus until I run
it against the XFSQA test suite.  There has been talk about adding
generic quota tests (as opposed to the XFS-specific quota tests, since
XFS has its own quota system different from the one used by other
Linux file systems) to XFSQA, and I think there are a few, but clearly
we need to add more.

So if you want to make the biggest impact in terms of trying to avoid
regressions, helping to contribute more tests to the XFSQA test suite
would be the most useful thing to do.  Right now Eric is the only ext4
developer is really familiar with the test suites, and he's added a
few tests, but he's super busy as of late.  I've dabbled with the test
suites a little, and made a few changes, but I haven't added a new
test before, and I'm also super busy as of late.  :-(

> Currently i'm looking in to autotest.kernel.org

Personally, I don't find frameworks for running automated tests to be
that useful.  They have their place, but the problem isn't really
running the tests; the challenge is getting someone to actually *look*
at the results.  Having a set of tests which is easy to set up, and
easy to run, is far more important.

If someone sets up autotest, but I don't have an occasion to look at
the results, it's not terribly useful.  If it's really easy for me to
run the XFSQA test suite, then I'll run it every couple of patches
that I add to the ext4 patch queue, and run the complete set before I
push a set of patches to Linus.  That's **far** more useful.

Automated tests are good, but they tend to be too noisy, and so no one
ever bothers to look at the output.  A useful automated system would
only run tests that had clear and unambiguous failures; be able to
tolerate it if some test starts to fail and still be useful, and then
be able to do git-style bisection searches so it can say, "test NNN
started failing at commit XXX", "test MMM started failing at commit
YYY", etc.  If it then mailed the results the relevant maintainer and
to the people who were the patch authors and the people who signed off
on the patch, then it would have a *chance* of being something that
people actually would pay attention to.  Unfortunately, I don't know
of any automated test framework which fits this bill.  :-(

So instead, I use the discpline of "make check" between almost every
single commit for e2fsprogs, and running "xfsqa -g quick" between most
patches (because the tests take a lot longer to run, I can't afford to
do it between every single patch), and "xfsqa -g auto" before I submit
a patchset to Linus (the most comprehensive set of tests, but it takes
hours so I have to run them overnight).

						- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ