Message-ID: <20250729021842.521c757f@foz.lan>
Date: Tue, 29 Jul 2025 02:18:42 +0200
From: Mauro Carvalho Chehab <mchehab+huawei@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: <dan.j.williams@...el.com>, Jakub Kicinski <kuba@...nel.org>, Sasha
Levin <sashal@...nel.org>, <workflows@...r.kernel.org>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<kees@...nel.org>, <konstantin@...uxfoundation.org>, <corbet@....net>,
<josh@...htriplett.org>
Subject: Re: [RFC 0/2] Add AI coding assistant configuration to Linux kernel
On Tue, 29 Jul 2025 00:12:33 +0200
Mauro Carvalho Chehab <mchehab+huawei@...nel.org> wrote:
> On Mon, 28 Jul 2025 13:46:53 -0400
> Steven Rostedt <rostedt@...dmis.org> wrote:
>
> > On Fri, 25 Jul 2025 13:34:32 -0700
> > <dan.j.williams@...el.com> wrote:
> >
> > > > This touches on explainability of AI. Perhaps the metadata would be
> > > > interesting for XAI research... not sure that's enough to be lugging
> > > > those tags in git history.
> > >
> > > Agree. The "who to blame" is "Author:". They signed DCO they are
> > > responsible for debugging what went wrong in any stage of the
> > > development of a patch per usual. We have a long history of debugging
> > > tool problems without tracking tool versions in git history.
> >
> > My point of the "who to blame" was not about the author of said code:
> > if two or more developers are using the same AI agent and some
> > pattern of bugs appears only with that AI agent, then we know that
> > the AI agent is likely the culprit, and we can look for code by other
> > developers who used that same AI agent.
> >
> > It's a way to track down a bug in a tool that is creating code, not
> > about moving blame from a developer to the agent itself.
>
> I don't think you should blame the tool, any more than you can
> blame gcc for badly written code. Also, the same way a kernel
> maintainer needs to know how to produce good code, someone using
> AI must learn how to use the tool properly.
>
> After all, at least at the current stage, AI is not intelligent.
Heh, after re-reading my post, I realized that I may have been too
technical, especially for people not familiar with electrical engineering
and systems control theory (*).
What I'm trying to say is that, while AI is a great tool, it is just
another tool that tries to guess something. If you are lucky enough,
you'll get decent results, but one should never trust its results,
as they are based on statistics: it will guess an answer that is likely
to be the right one, but could also be completely off.
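That statistical guessing can be sketched in a few lines of Python. This is a toy illustration, not any real model: the context string, the candidate completions, and the probabilities are all made up. The point is only that sampling from a distribution usually returns the most likely (and here, correct) answer, but sometimes returns something completely off:

```python
import random

# Made-up example: a "model" that, given a context, only knows a
# probability distribution over possible completions and samples
# from it. The probabilities below are invented for illustration.
completions = {
    "kfree(ptr); ptr = ": [("NULL;", 0.90), ("ptr;", 0.07), ("0x0;", 0.03)],
}

def guess(context, rng):
    options = completions[context]
    tokens = [t for t, _ in options]
    weights = [w for _, w in options]
    # Sample one completion according to its probability.
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
samples = [guess("kfree(ptr); ptr = ", rng) for _ in range(1000)]
# Most samples are the high-probability answer...
right = samples.count("NULL;")
# ...but a non-trivial fraction are wrong, by construction.
wrong = len(samples) - right
print(right, wrong)
```

Even with the "right" answer at 90% probability, roughly one guess in
ten is wrong here, and nothing in the output tells you which ones.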
(*) systems control theory is a field that studies the stability of
    systems. It can be used, for instance, to ensure that an electrical
    motor can be properly controlled and provide precise movements. It
    is not limited to mechanics, though: it can be used to explain any
    system that has some sort of feedback. In the light of control
    theory, AI training would be mapped as a feedback loop.
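As a toy illustration of that footnote (the "motor" below is a made-up first-order system, not a real motor model), a proportional feedback loop is stable when the gain is chosen well and diverges when it is not:

```python
# Minimal feedback-control sketch: each step measures the error
# between the setpoint and the current position, and applies a
# correction proportional to that error. The error evolves as
# e_{n+1} = (1 - gain) * e_n, so |1 - gain| < 1 means stability.
def simulate(gain, setpoint=1.0, steps=50):
    position = 0.0
    for _ in range(steps):
        error = setpoint - position
        position += gain * error   # the feedback path
    return abs(setpoint - position)

print(simulate(gain=0.5))   # |1 - 0.5| < 1: error shrinks each step
print(simulate(gain=2.5))   # |1 - 2.5| > 1: error grows without bound
```

The same loop, with the same plant, is stable or unstable depending
only on the gain; that is the kind of analysis control theory does.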
Regards,
Mauro