Message-ID: <90b0ba66-8d09-433c-a6a8-f46893db1ef7@lucifer.local>
Date: Mon, 28 Jul 2025 14:28:20 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Sasha Levin <sashal@...nel.org>
Cc: Greg KH <greg@...ah.com>, corbet@....net, linux-doc@...r.kernel.org,
workflows@...r.kernel.org, josh@...htriplett.org, kees@...nel.org,
konstantin@...uxfoundation.org, linux-kernel@...r.kernel.org,
rostedt@...dmis.org, Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 0/4] Add agent coding assistant configuration to Linux
kernel
On Mon, Jul 28, 2025 at 09:23:19AM -0400, Sasha Levin wrote:
> On Mon, Jul 28, 2025 at 02:13:01PM +0100, Lorenzo Stoakes wrote:
> > On Mon, Jul 28, 2025 at 08:45:19AM -0400, Sasha Levin wrote:
> > > > So at all times I think ensuring the human element is aware that they need
> > > > to do some kind of checking/filtering is key.
> > > >
> > > > But that can be handled by a carefully worded policy document.
> > >
> > > Right. The purpose of this series is not to create a new LLM policy but
> > > rather to try to enforce our existing set of policies on LLMs.
> >
> > I get that, but as you can see from my original reply, my concern is more
> > as to the non-technical consequences of this series.
> >
> > I retain my view that we need an explicit AI policy doc first, and ideally
> > this would be tempered by input at the maintainer's summit before any of
> > this proceeds.
> >
> > I think adding anything like this before that would have unfortunate
> > unintended consequences.
> >
> > And as a maintainer who does a fair bit of review, I'm likely to be on the
> > front lines of that :)
>
> Oh, apologies, I'm not trying to push for this to be included urgently:
> if there's interest in waiting on this until after the maintainers'
> summit/LPC, I don't have any objection to that.
Awesome, thanks; yeah I think this is the best approach to ensure we have
our ducks in a row.
>
> My point was more that I want to get this series in a "happy" state so
> we have it available whenever we come up with a policy.
Ack!
>
> I'm thinking that no matter what we land on at the end, we'll need
> something like this patch series to try and enforce that on the LLM side
> of things.
Sure, practically speaking it's unlikely that the decision will be
'absolutely not', in which case we ought to be prepared for how to
implement what's required.
>
> --
> Thanks,
> Sasha
Cheers, Lorenzo