Date:	Mon, 14 Apr 2008 06:39:39 +0200
From:	Willy Tarreau <w@....eu>
To:	david@...g.hm
Cc:	Stephen Clark <sclark46@...thlink.net>,
	Evgeniy Polyakov <johnpol@....mipt.ru>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Tilman Schmidt <tilman@...p.cc>, Valdis.Kletnieks@...edu,
	Mark Lord <lkml@....ca>, David Miller <davem@...emloft.net>,
	jesper.juhl@...il.com, yoshfuji@...ux-ipv6.org, jeff@...zik.org,
	linux-kernel <linux-kernel@...r.kernel.org>, git@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: Reporting bugs and bisection

On Sun, Apr 13, 2008 at 04:51:34PM -0700, david@...g.hm wrote:
> cross-posted to git for the suggestion at the bottom
> 
> On Sun, 13 Apr 2008, Stephen Clark wrote:
> 
> >Evgeniy Polyakov wrote:
> >>On Sun, Apr 13, 2008 at 10:33:49PM +0200, Rafael J. Wysocki (rjw@...k.pl) 
> >>wrote:
> >>>Things like this are very disappointing and have a very negative impact 
> >>>on bug
> >>>reporters.  We should do our best to avoid them.
> >>
> >>Shit happens. This is a matter of either bug report or those who were in
> >>the copy list. There are different people and different situations, in
> >>which they do not reply.
> >>
> >Well, less shit would happen if developers took the time to at least 
> >test their patches before submitting them. It's as if we just have the 
> >poor user do our testing for us. What kind of testing do developers 
> >do? I've been a Linux user and have followed the LKML for a number of 
> >years, and I have yet to see any test plans for any submitted patches.
> 
> I've been reading LKML for 11 years now, and I've tested kernels and 
> reported a few bugs along the way.
> 
> the expectation is that the submitter should have tested the patches 
> before submitting them (where hardware allows). but that "where hardware 
> allows" is a big problem: so many issues are dependent on hardware that 
> it's not possible to test everything.
> 
> there are people who download, compile and test the tree nightly (with 
> farms of machines to test different configs), but they can't catch 
> everything.
> 
> expecting the patches to be tested to the point where there are no bugs is 
> unreasonable.
[...]

Agreed. The difficulty is that only the developer knows how confident
he is in his code. Even the subsystem maintainer does not know, which
is the real issue: as long as the faulty code is not identified, he
does not know whom to ping.

And I think it might help if we could add a "Trust" rating to the
patches we submit, similar to "Tested-by" or "Signed-off-by". We could
use a scale of 1 to 5. Basically, when the patch was completed at 3am and
merely builds, it's more likely a 1/5. When it has been stressed for a
week, it would be a 4/5. 5/5 would only be used for backports of known
working code, for some widely-used external patches, or for trivial
patches (eg: doc/whitespace fixes). The goal would clearly not be to
blindly trust patches with a high rating (since they might break when
combined with others), but for the subsystem maintainer to quickly check
whether there are some the author does not trust 100%, in which case he
could ping the author to check whether his patch *may* have caused the
reported problem.
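Concretely, it could appear as one more tag next to the existing ones
(hypothetical example, since the "Trust:" tag does not exist today):

```
[PATCH] net: fix frobnication of foo

Trust: 2/5
Signed-off-by: A Developer <dev@example.com>
```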

What makes this rating system delicate is that the rating cannot be
changed afterwards. But after all, that's not much of a problem: a bug
may very well reveal itself a year after the code was merged, so it's
really the developer's estimation at submission time that matters.

For this to be used efficiently, we would need git-commit to accept a
new "-T <rating>" argument with the following possible values:

   0: untested (default)
   1: builds
   2: seems to be working
   3: passed basic non-regression tests
   4: survived stress testing at the developer's
   5: known to be working for a long time somewhere else
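Until such a flag exists, a maintainer script could parse the rating
out of commit messages itself. A minimal sketch (everything here is
hypothetical: the "Trust:" trailer, the "-T" flag and the
trust_rating() helper are all part of the proposal, not of git):

```python
import re

# Hypothetical "Trust:" trailer mirroring the proposed 0-5 scale;
# neither this trailer nor a "-T" git-commit flag exists today.
TRUST_RE = re.compile(r"^Trust:\s*([0-5])/5\s*$", re.MULTILINE)

def trust_rating(commit_message: str) -> int:
    """Return the Trust rating of a commit message, or 0 (untested) if absent."""
    m = TRUST_RE.search(commit_message)
    return int(m.group(1)) if m else 0

msg = """net: fix frobnication of foo

Survived one week of stress testing on the developer's machine.

Trust: 4/5
Signed-off-by: A Developer <dev@example.com>
"""
print(trust_rating(msg))  # -> 4
```

A maintainer chasing a regression could then walk the suspect commits
and ping the authors of anything rated below, say, 3/5 first.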

I'm sure many people will find this useless (or in fact reject the
idea because it would show that most code would be rated 1 or 2),
but I really think it can help subsystem maintainers make the
connection between a reported bug and a possible submitter.

Willy

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/