Message-ID: <12da718975b8a7d590c3f8d59242ddae946a114f.camel@mellanox.com>
Date: Tue, 21 Apr 2020 03:07:55 +0000
From: Saeed Mahameed <saeedm@...lanox.com>
To: "sashal@...nel.org" <sashal@...nel.org>
CC: "ecree@...arflare.com" <ecree@...arflare.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
"kuba@...nel.org" <kuba@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"gerlitz.or@...il.com" <gerlitz.or@...il.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH AUTOSEL 4.9 09/26] net/mlx5e: Init ethtool steering for
representors
On Thu, 2020-04-16 at 19:23 -0400, Sasha Levin wrote:
> On Thu, Apr 16, 2020 at 09:32:47PM +0000, Saeed Mahameed wrote:
> > On Thu, 2020-04-16 at 15:53 -0400, Sasha Levin wrote:
> > > If we agree so far, then why do you assume that the same people
> > > who do the above also perfectly tag their commits, and do perfect
> > > selection of patches for stable? "I'm always right except when
> > > I'm wrong".
> >
> > I am willing to accept people making mistakes, but not the AI..
>
> This is where we disagree. If I can have an AI that performs on par
> with an "average" kernel engineer - I'm happy with it.
>
> The way I see AUTOSEL now is an "average" kernel engineer who does
> patch sorting for me to review.
>
> Given I review everything that it spits out at me, it's technically a
> human error (mine), rather than a problem with the AI, right?
>
> > If it is necessary and we have a magical solution, I will write a
> > good AI with no false positives to fix or help avoid memleaks.
>
> Easier said than done :)
>
> I think that the "Intelligence" in AI suggests that it can make
> mistakes.
>
> > BUT if I can't achieve a 100% success rate, and I might end up
> > introducing a memleak with my AI, then I wouldn't use AI at all.
> >
> > We have different views on things.. if I know the AI is using
> > kmalloc wrongly, I fix it, end of story :).
> >
> > Fact: your AI is broken and can introduce _new_, uncalled-for
> > bugs, even if it is very, very good in 99.99% of cases.
>
> People are broken too, they introduce new bugs, so why are we
> accepting new commits into the kernel?
>
> My point is that everything is broken; you can't have 100% perfect
> anything.
>
> > > Here's my suggestion: give us a test rig we can run our stable
> > > release candidates through. Something that simulates the "real"
> > > load that customers are using. We promise that we won't release
> > > a stable kernel if your tests are failing.
> > >
> >
> > I will be more than glad to do so. Is there a formal process for
> > such a thing?
>
> I'd love to work with you on this if you're interested. There are a
> few options:
>
> 1. Send us a mail when you detect a push to a stable-rc branch. Most
> people/bots reply to Greg's announce mail with pass/fail.
Sounds like our best option for now, as we already have our own testing
infra that knows how to watch for external changes in mailing lists.
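
Just to sketch the kind of thing I mean for option 1 - this is a toy
illustration only, not our real CI. The stable-rc mirror URL and
branch are my best guess, and the polling interval and the
./run_tests.sh hook are made-up placeholders:

#!/usr/bin/env python3
# Toy watcher for new stable-rc pushes. Polls the remote branch head
# and kicks off a test run when it changes.
import subprocess
import time

# Assumed mirror of the stable release-candidate tree (placeholder).
STABLE_RC = "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git"
BRANCH = "refs/heads/linux-4.9.y"   # the 4.9 queue discussed here
POLL_SECONDS = 600                  # arbitrary polling interval

def branch_head(url: str, ref: str) -> str:
    """Return the commit id the remote ref currently points at."""
    out = subprocess.check_output(["git", "ls-remote", url, ref],
                                  text=True)
    return out.split()[0] if out else ""

def main() -> None:
    last = branch_head(STABLE_RC, BRANCH)
    while True:
        time.sleep(POLL_SECONDS)
        head = branch_head(STABLE_RC, BRANCH)
        if head and head != last:
            # New release candidate pushed: run the regression suite.
            # "./run_tests.sh" is a hypothetical local hook.
            subprocess.run(["./run_tests.sh", head], check=False)
            last = head

if __name__ == "__main__":
    main()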
>
> 2. Integrate your tests into kernelci (kernelci.org) - this means
> that you'll run a "lab" on prem, and kernelci will schedule builds
> and tests on its own, sending reports to us.
>
> 3. We're open to other solutions if you had something in mind; the
> first two usually work for people, but if you have a different
> requirement we'll be happy to figure it out.
>
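On the reporting side of option 1, replying pass/fail under the
announce mail could be as simple as the sketch below. The addresses,
message-id, subject line, and SMTP host are made-up placeholders, and
it assumes a local mail relay:

#!/usr/bin/env python3
# Sketch of a pass/fail reply threaded under a stable-rc announce
# mail. All identifiers below are illustrative, not real values.
import smtplib
from email.message import EmailMessage

ANNOUNCE_MSGID = "<example-announce-id@example.com>"  # hypothetical

msg = EmailMessage()
msg["From"] = "ci-bot@example.com"
msg["To"] = "stable@vger.kernel.org"
msg["Subject"] = "Re: [PATCH 4.9 000/026] 4.9.x-stable review"
# Thread the reply under the announce mail so archives group it.
msg["In-Reply-To"] = ANNOUNCE_MSGID
msg["References"] = ANNOUNCE_MSGID
msg.set_content(
    "Build results: no failures.\n"
    "Test results: all tests passing.\n"
)

with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)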
Thanks,
I will have to discuss this with our CI maintainers and see what we
prefer. I will let you know.