Message-ID: <aJFCoewqTIXlhnJk@lappy>
Date: Mon, 4 Aug 2025 19:30:41 -0400
From: Sasha Levin <sashal@...nel.org>
To: dan.j.williams@...el.com
Cc: Steven Rostedt <rostedt@...dmis.org>, Jiri Kosina <kosina@...il.com>,
Michal Hocko <mhocko@...e.com>,
David Hildenbrand <david@...hat.com>,
Greg KH <gregkh@...uxfoundation.org>,
Vlastimil Babka <vbabka@...e.cz>, corbet@....net,
linux-doc@...r.kernel.org, workflows@...r.kernel.org,
josh@...htriplett.org, kees@...nel.org,
konstantin@...uxfoundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] Add agent coding assistant configuration to Linux
kernel
On Mon, Aug 04, 2025 at 03:53:50PM -0700, dan.j.williams@...el.com wrote:
>Steven Rostedt wrote:
>> On Tue, 5 Aug 2025 00:03:29 +0200 (CEST)
>> Jiri Kosina <kosina@...il.com> wrote:
>>
>> > Al made a very important point somewhere earlier in this thread.
>> >
>> > The most important (from the code quality POV) thing is -- is there a
>> > person that understands the patch enough to be able to answer questions
>> > (coming from some other human -- most likely a reviewer/maintainer)?
>> >
>> > That's not something that'd be reflected in the DCO, but it's a very
>> > important fact for the maintainer's decision process.
>>
>> Perhaps this is what needs to be explicitly stated in the SubmittingPatches
>> document.
>>
>> I know we can't change the DCO, but could we add something stating that our
>> policy is that if you submit code, you certify that you understand said code,
>> even if (or especially if) it was produced by AI?
>
>It is already the case that human-developed code is not always
>understood by the submitter (e.g., bugs, or occasions of "no
>functional changes intended" commits referenced by "Fixes:"). It is also
>already the case that the speed at which code is applied has a component
>of the maintainer's trust in the submitter to stick around and address
>issues or work with the community.
>
>AI allows production of plausible code in higher volumes, but it does
>not fundamentally change the existing dynamic of development velocity vs
>trust.
Right: I think that the issue Jiri brought up is a human problem, not a
tooling problem.
We can try to tackle a symptom, but it's a losing battle.
>So an expectation that is worth clarifying is that the mere appearance of
>technical correctness is not sufficient to move a proposal forward. The
>details of what constitutes sufficient trust are subsystem-, maintainer-,
>or even per-function-specific. This is a nuanced expectation that human
>submitters struggle to meet, let alone AI.
>
>"Be prepared to declare a confidence interval in every detail of a patch
>series, especially any AI generated pieces."
Something along the lines of a Social Credit system for the humans
behind the keyboard? :)
Do we want to get there? Do we not?
--
Thanks,
Sasha