Message-ID: <3e5b8889-bf3a-4436-a99d-0396081e65e0@lucifer.local>
Date: Thu, 8 Jan 2026 10:32:39 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: James Bottomley <James.Bottomley@...senpartnership.com>
Cc: Dave Hansen <dave@...1.net>, Dave Hansen <dave.hansen@...ux.intel.com>,
linux-kernel@...r.kernel.org, Shuah Khan <shuah@...nel.org>,
Kees Cook <kees@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Miguel Ojeda <ojeda@...nel.org>, Luis Chamberlain <mcgrof@...nel.org>,
SeongJae Park <sj@...nel.org>, Dan Williams <dan.j.williams@...el.com>,
Steven Rostedt <rostedt@...dmis.org>, NeilBrown <neilb@...mail.net>,
Theodore Ts'o <tytso@....edu>, Sasha Levin <sashal@...nel.org>,
Jonathan Corbet <corbet@....net>, Vlastimil Babka <vbabka@...e.cz>,
workflows@...r.kernel.org, ksummit@...ts.linux.dev
Subject: Re: [PATCH] [v3] Documentation: Provide guidelines for
tool-generated content
On Wed, Jan 07, 2026 at 05:39:48PM -0500, James Bottomley wrote:
> On Wed, 2026-01-07 at 21:15 +0000, Lorenzo Stoakes wrote:
> > On Wed, Jan 07, 2026 at 11:18:52AM -0800, Dave Hansen wrote:
> > > On 1/7/26 10:12, Lorenzo Stoakes wrote:
> > > ...
> > > > I know Linus had the cute interpretation of it 'just being
> > > > another tool' but never before have people been able to do this.
> > >
> > > I respect your position here. But I'm not sure how to reconcile:
> > >
> > > LLMs are just another tool
> > > and
> > > LLMs are not just another tool
> > >
> > > :)
> >
> > Well I'm not asking you to reconcile that, I'm providing my point of
> > view which disagrees with the first position and makes a case for the
> > second. Isn't review about feedback both positive and negative?
> >
> > Obviously if this was intended to simply inform the community of the
> > committee's decision then apologies for misinterpreting it.
> >
> > I would simply argue that LLMs are not just another tool on the basis
> > of the drastic negative impact they've had in very many areas, which
> > you need only take a cursory glance at the world to observe.
> >
> > Thinking LLMs are 'just another tool' is to say, effectively, that the
> > kernel is immune from this, which seems to me a silly position.
>
> All tools are double-edged, and the better a tool is, the more
> problematic its harmful uses become, but people often use them anyway
> because of the beneficial uses. You don't, for instance, classify
> chainsaws as not just another tool because they can be used to deforest
> the Amazon. All the document is saying is that we start from the place
> of treating AI like any other tool and, like any other tool, if it
> proves to cause far more problems than it solves, then we can move on
> to other things. There are other tools we've tried and abandoned (like
> compiling the kernel with C++), so this really isn't any different.
I mean, using the same analogy, I'd say the existing norms are designed
for spoons; you probably wouldn't want to apply the same norms to a
chainsaw :)
>
> Regards,
>
> James
>