Message-ID: <20250731020226.3d008bcb@foz.lan>
Date: Thu, 31 Jul 2025 02:02:26 +0200
From: Mauro Carvalho Chehab <mchehab+huawei@...nel.org>
To: Sasha Levin <sashal@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>, Lorenzo Stoakes
 <lorenzo.stoakes@...cle.com>, Greg KH <greg@...ah.com>, corbet@....net,
 linux-doc@...r.kernel.org, workflows@...r.kernel.org,
 josh@...htriplett.org, kees@...nel.org, konstantin@...uxfoundation.org,
 linux-kernel@...r.kernel.org, Linus Torvalds
 <torvalds@...ux-foundation.org>, "Dr. David Alan Gilbert"
 <linux@...blig.org>
Subject: Re: [PATCH 0/4] Add agent coding assistant configuration to Linux
 kernel

On Wed, 30 Jul 2025 13:46:47 -0400
Sasha Levin <sashal@...nel.org> wrote:

> >> Some sort of a "traffic light" system:
> >>
> >>   1. Green: the subsystem is happy to receive patches from any source.
> >>
> >>   2. Yellow: "If you're unfamiliar with the subsystem and using any
> >>   tooling to generate your patches, please have a reviewed-by from a
> >>   trusted developer before sending your patch".
> >>
> >>   3. Red: no tool-generated patches without prior maintainer approval.  
> >

That sounds like a terrible idea. I mean, maintainers should be green for
good patches and red for bad ones. It doesn't matter whether they were
aided or generated by AI or by $TOOL. In the end, the one submitting a
patch must be able to properly understand, describe, and debug it, and
must also be able to test it in real life before submitting.

AI can do good things, but it can also do bad things. I'd say that anyone
using it should double-check the code at least twice, looking for any
hidden bugs.

I've been doing some experiments myself: sometimes an LLM can quickly
point out something broken, do root-cause analysis, complete a TODO
requirement, and even write unit tests and code.

However, sometimes the AI starts to "hallucinate"(*), pointing to things
that don't exist, like inventing fields on structures and command line
arguments that were never there (it likely inferred the names from
projects with similar patterns/goals).

(*) AI being a statistical tool, the correct term is "to diverge".
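
To make that concrete, here is a hypothetical sketch of the failure mode
(the struct and field names are invented for this example):

/* Hypothetical device state, invented for this illustration. */
struct demo_dev {
        int irq;
};

static int demo_reset(struct demo_dev *d)
{
        /*
         * A diverging assistant may confidently emit:
         *
         *      d->irq_count = 0;
         *
         * but struct demo_dev has no irq_count field, so the build
         * breaks right away. Subtler inventions (a plausible but
         * nonexistent flag or helper) can survive compilation, which
         * is why the double-check above is needed.
         */
        d->irq = 0;
        return 0;
}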

> >Perhaps. Of course there's the Coccinelle scripts that fix a bunch of code
> >around the kernel that will likely be ignored in this. But this may still be
> >a good start.  
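
For context, that is the kind of tree-wide mechanical rewrite a Coccinelle
semantic patch performs; the kernel carries such scripts under
scripts/coccinelle/ (e.g. the classic if-NULL-before-free cleanup). A
minimal before/after sketch, using a hypothetical struct foo:

#include <linux/slab.h>

/* Hypothetical struct, for illustration only. */
struct foo {
        char *buf;
};

/* Before: the redundant NULL test that the semantic patch matches;
 * kfree(NULL) is already a no-op. */
static void foo_release_before(struct foo *f)
{
        if (f->buf != NULL)
                kfree(f->buf);
}

/* After: what the generated tree-wide patch leaves behind. */
static void foo_release_after(struct foo *f)
{
        kfree(f->buf);
}

The semantic patch encodes that before/after pattern once and applies it
everywhere it matches.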

This is something that maintainers don't want: yet another tool that
newbies, wanting their one microsecond of fame by getting patches merged,
can use to start sending stuff that was never tested and brings no value.
Maybe we can add some text about that.

Thanks,
Mauro
