Message-ID: <Y/01z4EJNfioId1d@casper.infradead.org>
Date:   Mon, 27 Feb 2023 22:59:27 +0000
From:   Matthew Wilcox <willy@...radead.org>
To:     Sasha Levin <sashal@...nel.org>
Cc:     Eric Biggers <ebiggers@...nel.org>, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org, viro@...iv.linux.org.uk,
        linux-fsdevel@...r.kernel.org
Subject: Re: AUTOSEL process

On Mon, Feb 27, 2023 at 05:35:30PM -0500, Sasha Levin wrote:
> On Mon, Feb 27, 2023 at 09:38:46PM +0000, Eric Biggers wrote:
> > Just because you can't be 100% certain whether a commit is a fix doesn't mean
> > you should be rushing to backport random commits that have no indications they
> > are fixing anything.
> 
> The difference in opinion here is that I don't think it's rushing: the
> stable kernel rules say a commit must be in a released kernel, while the
> AUTOSEL timelines make it so a commit must have been in two released
> kernels.

Patches in -rc1 have been in _no_ released kernels.  I'd feel a lot
better about AUTOSEL if it didn't pick up changes until, say, -rc4,
unless they were cc'd to stable.

> > Nothing has changed, but that doesn't mean that your process is actually
> > working.  7 days might be appropriate for something that looks like a security
> > fix, but not for a random commit with no indications it is fixing anything.
> 
> How do we know if this is working or not, though? How do you quantify
> the number of useful commits?

Sasha, 7 days is too short.  People have to be allowed to take holiday.

> I'd love to improve the process, but for that we need to figure out
> criteria for what we consider good or bad, collect data, and make
> decisions based on that data.
> 
> What I'm getting from this thread is a few anecdotal examples and
> statements that the process isn't working at all.
> 
> I took Jon's stablefixes script, which he used for his previous
> articles on stable kernel regressions (here:
> https://lwn.net/Articles/812231/), and tried running it on the 5.15
> stable tree (just a random pick). I ignored the non-user-visible
> regressions as Jon defined them in his article (basically issues that
> were introduced and fixed in the same release) and ended up with 604
> commits that caused a user-visible regression.
> 
> Out of those 604 commits:
> 
>  - 170 had an explicit stable tag.
>  - 434 did not have a stable tag.

I think a lot of people don't realise they have to _both_ put a Fixes
tag _and_ add a Cc: stable.  How many of those 604 commits had a Fixes
tag?
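
As a rough illustration of how such a count could be done, here is a
minimal Python sketch; the revision range and the trailer patterns are
assumptions for the example, not what Jon's stablefixes script actually
does:

#!/usr/bin/env python3
# Minimal sketch: count how many commits in a revision range carry a
# Fixes: trailer and/or a Cc: stable tag.  Run inside a kernel clone;
# the revision range below is only an example.
import subprocess

REV_RANGE = "v5.15..linux-5.15.y"  # hypothetical range

# %x00 separates the hash from the body, %x01 separates commits.
log = subprocess.run(
    ["git", "log", "--format=%H%x00%B%x01", REV_RANGE],
    capture_output=True, text=True, check=True,
).stdout

fixes = cc_stable = both = 0
for entry in log.split("\x01"):
    if not entry.strip():
        continue
    _sha, _, body = entry.partition("\x00")
    has_fixes = "\nFixes:" in body
    has_cc = "stable@vger.kernel.org" in body
    fixes += has_fixes
    cc_stable += has_cc
    both += has_fixes and has_cc

print(f"Fixes: {fixes}  Cc stable: {cc_stable}  both: {both}")

A run over a stable branch would show how many backported commits were
tagged with only one of the two, which is the gap being discussed here.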
