Message-ID: <804dabb00804300748x3f363710k5bc8f5728a1587e6@mail.gmail.com>
Date:	Wed, 30 Apr 2008 22:48:47 +0800
From:	"Peter Teoh" <htmldeveloper@...il.com>
To:	"David Miller" <davem@...emloft.net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Slow DOWN, please!!!

On Wed, Apr 30, 2008 at 10:03 AM, David Miller <davem@...emloft.net> wrote:
>
>  This is starting to get beyond frustrating for me.
>
>  Yesterday, I spent the whole day bisecting boot failures
>  on my system due to the totally untested linux/bitops.h
>  optimization, which I fully analyzed and debugged.
>
>  Today, I had hoped that I could get some work done of my
>  own, but that's not the case.
>
>  Yet another bootup regression got added within the last 24
>  hours.
>
>  I don't mind fixing the regression or two during the merge
>  window but THIS IS ABSOLUTELY, FUCKING, RIDICULOUS!
>
>  The tree breaks every day, and it's becoming an extremely
>  non-fun environment to work in.
>
>  We need to slow down the merging, we need to review things
>  more, we need people to test their fucking changes!
>  --

Just some comments:

As in a football team, everyone has an important role to play, and you
had better pass the ball as quickly as you can, otherwise you are going
to tire yourself out.

So, in a development team, if you think the workload is unevenly
distributed, speak up.   Or think of some way to balance the workload
automatically - specifically in the area of change review.   (At other
times it is not easy to pass the load around at all, e.g. when a bug
happens only on your machine and not on anyone else's.)

1.   Generally, the more people who have reviewed a piece of work, the
higher the chance that it is OK.

2.   The more variation there is in real testing, the better.
"Variation" here means testing by users with different backgrounds and
skills, running different applications, and - most importantly -
applying and testing the patch against different base kernel versions.

3.   From those two numbers alone we can immediately derive some
measure of confidence in the patch - correct?

4.   So we could put all of this on a web page: the patches themselves
and the reviewers/testers who have worked on each of them.

When someone comes in and reviews a patch, its review counter increases
by one; when someone tests it, its tester counter increases by one.

And I suppose everyone would then try to cover the patches that are
less well covered by others - automatic load balancing done in a
distributed manner (a rough sketch follows below).
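
Here is a rough sketch, in Python, of the kind of record I have in
mind; the names and the confidence formula are made up purely for
illustration and do not refer to any existing tool:

# Hypothetical sketch of the tracking page's data model; names and the
# confidence formula are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class PatchRecord:
    subject: str
    review_count: int = 0                           # point 1: how many reviewers
    test_configs: set = field(default_factory=set)  # point 2: distinct test setups

    def add_review(self):
        # Someone reviewed the patch: the review counter goes up by one.
        self.review_count += 1

    def add_test(self, base_kernel, workload):
        # A "variation" is a distinct (base kernel, workload) combination.
        self.test_configs.add((base_kernel, workload))

    def confidence(self):
        # Point 3: a crude score taken from the two numbers alone.
        return self.review_count + 2 * len(self.test_configs)

def least_covered(patches):
    # Lets everyone pick up the patches that are least covered so far,
    # i.e. the distributed load balancing described above.
    return sorted(patches, key=lambda p: p.confidence())

Reviewing would then be nothing more than a call to add_review(), and
the page could simply list the least_covered() patches first.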

Avoid requiring too much information to be filled in, though; that will
discourage people from taking up the work.   Let participants spend
their precious time on the reviewing itself instead.

So before consolidating the sources, just by looking at the numbers you
can see how successful the consolidation is likely to be.   If a patch
is less well tested, avoid including it in the consolidation.
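
For example, the gate before a merge could be nothing more than a
threshold on those two numbers (again a hypothetical sketch, with
arbitrary thresholds):

# Hypothetical consolidation gate: keep only patches whose review count
# and number of distinct test variations clear an (arbitrary) threshold.

def ready_for_merge(review_count, test_variations,
                    min_reviews=2, min_variations=3):
    return review_count >= min_reviews and test_variations >= min_variations

# Example input: (reviews, test variations) per candidate patch.
candidates = {"patch-A": (3, 5), "patch-B": (1, 0), "patch-C": (2, 3)}
to_merge = [name for name, (r, t) in candidates.items()
            if ready_for_merge(r, t)]
print(to_merge)   # -> ['patch-A', 'patch-C']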

Comments, please.
-- 
Regards,
Peter Teoh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
