Date:   Thu, 16 Apr 2020 19:23:02 -0400
From:   Sasha Levin <sashal@...nel.org>
To:     Saeed Mahameed <saeedm@...lanox.com>
Cc:     "ecree@...arflare.com" <ecree@...arflare.com>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "gerlitz.or@...il.com" <gerlitz.or@...il.com>,
        "davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH AUTOSEL 4.9 09/26] net/mlx5e: Init ethtool steering for
 representors

On Thu, Apr 16, 2020 at 09:32:47PM +0000, Saeed Mahameed wrote:
>On Thu, 2020-04-16 at 15:53 -0400, Sasha Levin wrote:
>> If we agree so far, then why do you assume that the same people who do
>> the above also perfectly tag their commits, and do perfect selection of
>> patches for stable? "I'm always right except when I'm wrong".
>
>I am willing to accept people making mistakes, but not the AI.

This is where we disagree. If I can have an AI that performs on par with
an "average" kernel engineer, I'm happy with it.

The way I see AUTOSEL now is as an "average" kernel engineer who does
patch sorting for me to review.

Given that I review everything it spits out at me, it's technically a
human error (mine) rather than a problem with the AI, right?

>If it is necessary and we have a magical solution, I will write a good AI
>with no false positives to fix or help avoid memleaks.

Easier said than done :)

I think the "Intelligence" in AI suggests that it can make mistakes.

>BUT if I can't achieve a 100% success rate, and I might end up
>introducing a memleak with my AI, then I wouldn't use AI at all.
>
>We have different views on things. If I know the AI is using kmalloc
>wrongly, I fix it, end of story :).
>
>Fact: your AI is broken, it can introduce _new_, uncalled-for bugs, even
>if it is very very very good in 99.99% of cases.

People are broken too; they introduce new bugs. So why are we accepting
new commits into the kernel?

My point is that everything is broken; you can't have 100% perfect
anything.

>> Here's my suggestion: give us a test rig we can run our stable release
>> candidates through. Something that simulates the "real" load that
>> customers are using. We promise that we won't release a stable kernel
>> if your tests are failing.
>>
>
>I will be more than glad to do so. Is there a formal process for such a
>thing?

I'd love to work with you on this if you're interested. There are a few
options:

1. Send us a mail when you detect a push to a stable-rc branch. Most
people/bots reply to Greg's announce mail with pass/fail; there's a rough
sketch of what this could look like below.

2. Integrate your tests into kernelci (kernelci.org) - this means that
you'll run a "lab" on-prem, and kernelci will schedule builds and tests
on its own, sending reports to us.

3. We're open to other solutions if you have something in mind. The first
two usually work for people, but if you have a different requirement
we'll be happy to figure it out.
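
For option 1, here's a rough, purely illustrative sketch (in Python) of
what such a bot could look like: poll the stable-rc tree for a new head
on the branch you care about, run your own test suite against it, and
mail a pass/fail report. The tree URL, branch name, test command and
mail addresses below are placeholders, not something we mandate:

#!/usr/bin/env python3
# Rough sketch only: poll the stable-rc tree for a new release-candidate
# head, run a local test suite against it, and mail a pass/fail report.
# The tree URL, branch, test command and mail addresses are placeholders.

import smtplib
import subprocess
import time
from email.message import EmailMessage

# Double-check the URL; this is where I believe the stable-rc branches live.
STABLE_RC = "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git"
BRANCH = "linux-4.9.y"                   # branch you care about
TEST_CMD = ["./run-driver-tests.sh"]     # placeholder for your own test suite
REPORT_TO = "stable@vger.kernel.org"     # or reply to the -rc announce mail
REPORT_FROM = "ci-bot@example.com"

def rc_head():
    """Return the current commit id of the stable-rc branch, or None."""
    out = subprocess.run(
        ["git", "ls-remote", STABLE_RC, "refs/heads/" + BRANCH],
        check=True, capture_output=True, text=True).stdout
    return out.split()[0] if out else None

def send_report(head, passed):
    """Mail a one-line pass/fail report for the given commit."""
    msg = EmailMessage()
    msg["From"] = REPORT_FROM
    msg["To"] = REPORT_TO
    msg["Subject"] = "[%s stable-rc] %s: %s" % (
        BRANCH, head[:12], "PASS" if passed else "FAIL")
    msg.set_content("Test results for stable-rc commit %s: %s." % (
        head, "all tests passed" if passed else "failures detected"))
    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is running
        smtp.send_message(msg)

last = None
while True:
    head = rc_head()
    if head and head != last:
        # New -rc push detected: run the suite and report the outcome.
        passed = subprocess.run(TEST_CMD).returncode == 0
        send_report(head, passed)
        last = head
    time.sleep(15 * 60)  # poll every 15 minutes

Obviously that's just the skeleton: in practice you'd fetch and build the
tree, boot it on your own hardware, and thread the report into Greg's -rc
announce mail rather than starting a new thread.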

-- 
Thanks,
Sasha
