Message-ID: <1f5b0227-dbf6-4294-8532-525b3e405dc2@linux-m68k.org>
Date:   Sat, 1 Jul 2023 11:46:18 +1000 (AEST)
From:   Finn Thain <fthain@...ux-m68k.org>
To:     Steven Rostedt <rostedt@...dmis.org>
cc:     Theodore Ts'o <tytso@....edu>, linux-doc@...r.kernel.org,
        tech-board-discuss@...ts.linux-foundation.org,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-kernel@...r.kernel.org
Subject: Measurement, was Re: [Tech-board-discuss] [PATCH] Documentation:
 Linux Contribution Maturity Model and the wider community


On Wed, 21 Jun 2023, Steven Rostedt wrote:

> 
> If your point is mainly the second part of that paragraph, which is to 
> tie in metrics to reflect maintainer effectiveness, then I think I agree 
> with you there. One metric is simply the time a patch is ignored by a 
> maintainer on a mailing list (where the maintainer is Cc'd and it is 
> obvious the patch belongs to their subsystem). I know I fail at that, 
> especially when my work is pushing me to focus on other things.
> 

A useful metric when pushing for a higher patch rate is the rework rate.

I have found that 'Fixes' tags can be used to quantify this. I don't have 
scripts to do so, but others probably do. (My purpose at the time was to 
quantify my own rework rate by counting my own commit hashes when they 
appeared in subsequent 'Fixes' tags.) Note that a low 'Fixes' count could 
also indicate inadequate bug reporting processes, so additional metrics 
may be needed.
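
Something along these lines ought to do the job. This is only a rough, 
untested sketch: the author address and revision range are placeholders, 
and abbreviated 'Fixes' hashes are matched against full hashes by prefix.

#!/usr/bin/env python3
# Rough estimate of an author's rework rate: how many of their commits
# are later named in a 'Fixes:' tag anywhere in the history.
import re
import subprocess

AUTHOR = "you@example.org"   # placeholder: author to measure
RANGE = "v6.3..v6.4"         # placeholder: window of interest

def git(*args):
    return subprocess.run(("git",) + args, capture_output=True,
                          text=True, errors="replace", check=True).stdout

# Commits authored in the window of interest.
authored = git("log", "--author=" + AUTHOR, "--format=%H", RANGE).split()

# Abbreviated hashes cited by 'Fixes:' tags anywhere in the history.
fixes = set(re.findall(r"^\s*Fixes:\s+([0-9a-f]{8,40})\b",
                       git("log", "--format=%B"), re.M | re.I))

# A commit counts as reworked if some 'Fixes' hash is a prefix of it.
reworked = [h for h in authored if any(h.startswith(f) for f in fixes)]

print("%d of %d commits were later named in a Fixes: tag"
      % (len(reworked), len(authored)))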

Where the practices relating to 'Fixes' tagging and bug reporting are 
uniform across subsystems, it might be possible to compare the diverse 
processes and methodologies presently in use.

BTW, I assume that 'Fixes' tags are already being used to train AI models 
to locate bugs in existing code. If such a model could be used to evaluate 
new patches as they are posted, it might make the code review process more 
efficient.
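
For what it's worth, extracting raw training pairs from those tags is not 
hard. A rough, untested sketch (the output format is arbitrary), pairing 
each fix with the commit it repairs:

#!/usr/bin/env python3
# Emit (buggy commit, fixing commit) pairs mined from 'Fixes:' tags.
import re
import subprocess

def git(*args):
    return subprocess.run(("git",) + args, capture_output=True,
                          text=True, errors="replace", check=True).stdout

# %x00 marks the start of each record; %H is the fixing commit,
# %B is its full message body.
log = git("log", "--format=%x00%H%n%B")

for record in log.split("\x00")[1:]:
    fixer, _, body = record.partition("\n")
    for buggy in re.findall(r"^\s*Fixes:\s+([0-9a-f]{8,40})\b",
                            body, re.M | re.I):
        # Abbreviated hash of the offending commit, then the full
        # hash of the commit that fixed it.
        print(buggy, fixer)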

The same approach could probably be generalized somewhat. For example, a 
'Modernizes' tag might be used to train an AI model to target design 
patterns that are being actively replaced anywhere in the code base.

The real payoff from this kind of automation is that an improvement made 
by any one reviewer gets amplified so that it reaches across many 
subsystems and mailing lists -- but only once the automation is scaled up 
and widely deployed. We already see this effect with Coccinelle semantic 
patches.
