Message-ID: <aUrVzuMz5D9QYF4O@sirena.co.uk>
Date: Tue, 23 Dec 2025 17:47:58 +0000
From: Mark Brown <broonie@...nel.org>
To: Sasha Levin <sashal@...nel.org>
Cc: tools@...nel.org, linux-kernel@...r.kernel.org,
	torvalds@...ux-foundation.org, sfr@...b.auug.org.au
Subject: Re: [RFC 0/5] LLMinus: LLM-Assisted Merge Conflict Resolution

On Tue, Dec 23, 2025 at 07:36:18AM -0500, Sasha Levin wrote:
> On Mon, Dec 22, 2025 at 02:50:55PM +0000, Mark Brown wrote:
> > On Sun, Dec 21, 2025 at 11:10:11AM -0500, Sasha Levin wrote:

> > clear who would want the various intermediate merges either, I suppose
> > that having some of the trees pulled into multiple places might help
> > shake out some of the issues due to things getting sent to Linus in a
> > different order but OTOH it will increase the total number of merges
> > done and tested, which is itself a cost.  We could also shake out
> > ordering issues by doing something like randomising the ordering.  I think
> > I'd want some demand or use case for doing more intermediate merges
> > rather than just doing a bunch of them for the sake of it.

> My thinking around it was to enable faster per-subsystem tests than what we
> currently do. For example, we can quickly build mm-next and run mm focused
> tests on it.

If we start putting everything into intermediate merges then inevitably
some of those merges are going to sit later in the process and get
generated correspondingly later, meaning they're nearer to the
production of the full -next.  I'm also not clear that we have enough
trees that would update multiple times a day.

> Since creating these per-subsystem trees is fairly cheap and can happen even
> a few times a day, we can help identify issues way earlier in the process.
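
Agreed that the merge construction itself is cheap - for the sake of
discussion I'm picturing something like the below, where all the
remote, branch and directory names are made up for illustration rather
than anything that exists today:

#!/usr/bin/env python3
"""Sketch only: merge a few related trees into a throwaway integration
branch and do a quick targeted build.  Remote, branch and directory
names are placeholders, not the real -next configuration."""
import subprocess

# (remote, branch) pairs for one hypothetical subsystem grouping
TREES = [
    ("mm", "mm-everything"),
    ("slab", "slab/for-next"),
]

def run(*cmd):
    subprocess.run(cmd, check=True)

run("git", "checkout", "-B", "mm-next", "origin/master")
for remote, branch in TREES:
    run("git", "fetch", remote, branch)
    # check=True means we stop at the first conflicting merge and let
    # a human sort it out.
    run("git", "merge", "--no-edit", f"{remote}/{branch}")

# Quick smoke build of just the directory this grouping touches.
run("make", "defconfig")
run("make", "-j16", "mm/")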

To be clear, unless things are super prone to conflicts the big cost of
adding stuff to -next generally isn't doing the merges, it's build
testing the results.  To that end the main potential advantage I can see
in doing submerges would be if we could parallelise the build testing
portion of things.  That would need some consideration of the complexity
of the scripting, the build machines and the cognitive load involved,
and if we were doing that the considerations for constructing submerges
would be a bit different.  It has crossed my mind, but it'd be
non-trivial to do and wouldn't be intended to produce intermediate
merges that are useful to anyone else.
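
As a strawman, the single machine toy version of that parallelisation
would be something like the below (branch names and job counts
invented, one worktree per submerge); the real version would be fanning
the builds out to separate builders, which is where the scripting,
machine and cognitive load costs start to show up:

#!/usr/bin/env python3
"""Sketch only: build test several submerge branches in parallel,
one git worktree per branch.  Branch names and job counts made up."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUBMERGES = ["mm-next", "fs-next", "drivers-next"]   # hypothetical

def build(branch):
    tree = f"../build-{branch}"
    # Separate worktrees so the builds don't stomp on each other.
    subprocess.run(["git", "worktree", "add", "--force", tree, branch],
                   check=True)
    subprocess.run(["make", "-C", tree, "defconfig"], check=True)
    result = subprocess.run(["make", "-C", tree, "-j8"])
    return branch, result.returncode

with ThreadPoolExecutor(max_workers=len(SUBMERGES)) as pool:
    for branch, rc in pool.map(build, SUBMERGES):
        print(branch, "OK" if rc == 0 else "FAILED")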

> > This seems like a very separate experiment to your LLM merge thing.

> Right, just going off on a tangent based on the Maintainers' Summit feedback
> about how useful fs-next is.

A key part of this is that the filesystem people had the need, capacity
and desire to test a specific merge.  It's not that the merge started
happening and the filesystem people then saw it and realised that it'd
be really useful; they wanted and asked for the merge because it filled
a specific need they had identified.  If there are other situations like
that, that's a very different, and much more clearly valuable, prospect
than producing intermediate merges and hoping they're useful.

With my testing hat on there are costs to adding extra trees to test,
and to producing those trees more often.  You need capacity to both run
the tests and triage the results, and an audience that is going to care
about the results.  If you're adding a merged tree you generally either
want to be able to drop individual testing of the component trees or to
have some reason to believe that that specific merge is likely to be
where relevant issues are introduced.  For example, the reason I
generally recommend that people doing CI cover -next as well as their
specific trees is that you can catch issues from other trees that are
going to impact your testing (eg, breaking the platforms you test)
before they end up coming into your tree via Linus' tree, keeping your
baseline stable.  With that goal you're actively looking to see as many
trees as possible integrated.

My guess would be that many areas of the kernel already have workflows
that meet whatever needs they have for integration trees and have no
need for anything done centrally.  If there are areas where there is a
need then by all means add the merges, but I think they should be
something that people actively want.

