Message-ID: <alpine.LRH.2.00.2508050025060.22517@gjva.wvxbf.pm>
Date: Tue, 5 Aug 2025 00:30:53 +0200 (CEST)
From: Jiri Kosina <kosina@...il.com>
To: Steven Rostedt <rostedt@...dmis.org>
cc: Sasha Levin <sashal@...nel.org>, Michal Hocko <mhocko@...e.com>,
David Hildenbrand <david@...hat.com>, Greg KH <gregkh@...uxfoundation.org>,
Vlastimil Babka <vbabka@...e.cz>, corbet@....net,
linux-doc@...r.kernel.org, workflows@...r.kernel.org,
josh@...htriplett.org, kees@...nel.org, konstantin@...uxfoundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] Add agent coding assistant configuration to Linux
kernel
On Mon, 4 Aug 2025, Steven Rostedt wrote:
> I know we can't change the DCO, but could we add something stating that
> our policy is: if you submit code, you certify that you understand said
> code, even if (especially if) it was produced by AI?
Yeah, I think that's *precisely* what's needed.
Legal stuff is one thing. Let's assume for now that it's handled by the LF
statement, DCO, whatever.
But "if I need to talk to a human who has a real clue about this code
change, who is that?" absolutely (in my view) needs to be reflected in the
changelog metadata. Because the more you challenge LLMs, the more they
will hallucinate.
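For illustration, that could be a simple changelog trailer alongside the
usual ones; the tag name below is purely hypothetical, not an existing
kernel convention:

    subsystem: fix frobnication race

    <changelog body, written and understood by a human>

    AI-assisted-by: <tool/model used>
    Signed-off-by: Human Developer <dev@example.org>

The point being that Signed-off-by would keep pointing at the human who
can actually answer questions about the change.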
If for nothing else, then for accountability (not legal, but factual
accountability). An LLM is never going to be responsible for the generated
code in the "human-to-human" sense.
AI can assist, but a human needs to be the one taking on the
responsibility by proxy (if he/she decides to do so), with all the
consequences (again, not talking legal here at all).
Thanks,
--
Jiri Kosina
SUSE Labs