Message-ID: <20250730154038.50e2a027@gandalf.local.home>
Date: Wed, 30 Jul 2025 15:40:38 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: "Theodore Ts'o" <tytso@....edu>
Cc: Al Viro <viro@...iv.linux.org.uk>, Sasha Levin <sashal@...nel.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Greg KH <greg@...ah.com>,
corbet@....net, linux-doc@...r.kernel.org, workflows@...r.kernel.org,
josh@...htriplett.org, kees@...nel.org, konstantin@...uxfoundation.org,
linux-kernel@...r.kernel.org, Linus Torvalds
<torvalds@...ux-foundation.org>, "Dr. David Alan Gilbert"
<linux@...blig.org>
Subject: Re: [PATCH 0/4] Add agent coding assistant configuration to Linux
kernel
On Wed, 30 Jul 2025 15:10:33 -0400
"Theodore Ts'o" <tytso@....edu> wrote:
> Any tool can be a force multiplier, either for good or for ill.
>
> For example, I suspect we have a much greater set of problems from
> $TOOLs other than Large Language Models. For example, people who use
> "git grep strcpy" and send patches (because strcpy is eeeevil), some
> of which don't even compile, and some of which are just plain wrong.
> Ditto people who take a syzbot reproducer, make some change which
> makes the problem go away, and then submit a patch, only for
> maintainers to point out that the patch introduced bugs and/or really
> didn't fix the problem.
>
> I don't think that we should therefore forbid any use of patches
> generated using the assistance of "git grep" or syzbot. That's
> because I view this as a problem of the people using the tool, not the
> tool itself. It's just that AI / LLMs have become a Boogeyman
> that inspires a lot of fear and loathing.
I think some of the fear is that when a new tool becomes available, a
bunch of patch monkeys start sending "fixes" to the maintainers because
said tool said so. There have been times I had to ask for the cocci scripts
to be changed because too many people were flagging so-called issues in my
code that were really just guideline violations and caused no real bugs.
LLMs are now a huge new feature that many companies (including ours) are
highly encouraging their engineers to start using. I can see that once
someone gets comfortable with the code an LLM produces, they will start
turning their attention to us. Kernel code has a lot more subtleties than
other code (stack constraints, interrupts, etc.) that an AI may not be
aware of.
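
For example (a hypothetical sketch, not from the original mail; the
handler and the names in it are made up), code like the following
compiles and looks like perfectly ordinary C, yet trips over both of
the constraints mentioned above:

#include <linux/interrupt.h>	/* irqreturn_t, IRQ_HANDLED, IRQ_NONE */
#include <linux/slab.h>		/* kmalloc(), kfree() */
#include <linux/string.h>	/* memset() */

/* Hypothetical IRQ handler: plausible-looking, but broken. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	char buf[4096];	/* Bad: kernel stacks are small (often 8-16KB
			 * total), so a 4KB local can overflow one. */
	void *p = kmalloc(64, GFP_KERNEL); /* Bad: GFP_KERNEL may sleep,
					    * which is forbidden in
					    * interrupt context; this
					    * needs GFP_ATOMIC. */

	if (!p)
		return IRQ_NONE;
	memset(buf, 0, sizeof(buf));
	kfree(p);
	return IRQ_HANDLED;
}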
This might just be paranoia, but we want to be prepared if it does happen.
-- Steve