Message-ID: <20260126185419.626ba56e@kernel.org>
Date: Mon, 26 Jan 2026 18:54:19 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: "Russell King (Oracle)" <linux@...linux.org.uk>
Cc: Andrew Lunn <andrew@...n.ch>,
 Alexandre Torgue <alexandre.torgue@...s.st.com>,
 Andrew Lunn <andrew+netdev@...n.ch>,
 "David S. Miller" <davem@...emloft.net>,
 Eric Dumazet <edumazet@...gle.com>,
 Heiko Stuebner <heiko@...ech.de>,
 linux-arm-kernel@...ts.infradead.org,
 linux-rockchip@...ts.infradead.org,
 linux-stm32@...md-mailman.stormreply.com,
 netdev@...r.kernel.org,
 Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH net-next v2 06/22] net: stmmac: rk: add SoC specific
 ->init() method

On Tue, 27 Jan 2026 01:55:29 +0000 Russell King (Oracle) wrote:
> On Mon, Jan 26, 2026 at 05:16:06PM -0800, Jakub Kicinski wrote:
> > On Tue, 27 Jan 2026 00:59:05 +0000 Russell King (Oracle) wrote:  
> > > This sounds like my contributions to netdev aren't valued, and if that's
> > > the case, I will stop.  
> > 
> > Quite the opposite. What I'm saying is that your complaints make me
> > feel like the weekends spent trying to drag this project's testing out
> > of the stone age are not appreciated. Of course your contributions
> > are appreciated.
> > 
> > The AI code reviews on existing buggy code are indeed very painful.
> > Not sure what we can do here to make contributing easier.
> > It costs us around $2 now to review a single patch, so we can't afford
> > public access. I think Google is working on making Gemini code reviews
> > public and free; hopefully that materializes.  
> 
> For a series of this size and complexity, the AI reviews are valued
> because they find real issues that I can't test for.
> 
> The big problem is that the AI only finds one issue with a patch, not
> all the issues. So, it's going to take multiple submissions to get to
> a point where the AI review of this series is clean.
> 
> I suspect the problem with "AI only finds one issue" is that the AI
> systems aren't advanced enough to do anything else yet.

Yes, looking at its "reasoning" output, it goes down different
investigation paths each run and, more importantly, it runs out of
tokens at some point, so it won't cover the same paths every time.

> So, do I continue fixing the AI-reported issues and resubmitting a new
> version of this series each day this week, costing $44 each time?

I think so. I don't want to change our process because of AI, but
some ways to save cost rhyme with our normal recommendations:
keep the series under 15 patches, split it up, and extract
trivial patches so that they can be applied and not reposted.

> Do we reach a point where it gets merged even though the AI review
> still has issues?

Whether the comment comes from AI is secondary, so it's just a question
of whether we merge code knowing that it has issues. Rarely, I guess.

> These are honest questions... and if they haven't been considered, I
> think they need to be, because I can see this series becoming very
> expensive.
