Date:   Thu, 16 Apr 2020 17:06:46 +0100
From:   Edward Cree <ecree@...arflare.com>
To:     Sasha Levin <sashal@...nel.org>
CC:     Or Gerlitz <gerlitz.or@...il.com>,
        Greg KH <gregkh@...uxfoundation.org>,
        Jakub Kicinski <kuba@...nel.org>,
        Stable <stable@...r.kernel.org>,
        "Linux Netdev List" <netdev@...r.kernel.org>,
        Saeed Mahameed <saeedm@...lanox.com>,
        David Miller <davem@...emloft.net>
Subject: Re: [PATCH AUTOSEL 4.9 09/26] net/mlx5e: Init ethtool steering for
 representors

On 16/04/2020 01:00, Sasha Levin wrote:
> I'd maybe point out that the selection process is based on a neural
> network which knows about the existence of a Fixes tag in a commit.
>
> It does exactly what you're describing, but also takes a bunch more
> factors into its decision process ("panic"? "oops"? "overflow"? etc).
Yeah, that's why I found it odd that you were responding in a way that
 _looked like_ classic confusion of P(A|B) and P(B|A).  I just wanted
 to make sure we had that common ground before launching into a long
 Bayesian explanation.

So, let's go:
Let's imagine that 10% of all commits are stable-worthy, and we have a
 threshold that says we autosel a patch if we think there's better than
 50% chance that it's stable-worthy.  Then 50% of stable-worthy commits
 have a Fixes: tag (whose referent exists in the stable tree already),
 whereas some unknown fraction, let's say 5%, of non-stable-worthy
 commits have a Fixes: tag.
Then P(S|F) = P(F|S)P(S) / (P(F|S)P(S) + P(F|¬S)P(¬S))
            = (0.5 × 0.1) / (0.5 × 0.1 + 0.05 × 0.9)
            = 0.05 / (0.05 + 0.045) = 0.526...
That is, a patch with a Fixes: tag is 52.6% likely to be stable-worthy,
 *not* 50%.  (The disparity would be bigger if P(F|¬S) were smaller;
 conversely, if P(F|¬S) were larger, P(S|F) could be _less than_ 50%.)
But also, P(S|¬F) = P(¬F|S)P(S) / (P(¬F|S)P(S) + P(¬F|¬S)P(¬S))
                  = (0.5 × 0.1) / (0.5 × 0.1 + 0.95 × 0.9)
                  = 0.05 / (0.05 + 0.855) = 0.055...
That is, a patch without a Fixes: tag is only 5.5% likely to be stable-
 worthy, which is *less* than the 10% base rate for all patches.  So
 now you need to get *more* of the positive evidence (panic/oops/overflow
 etc.) before you get pushed over the 50% threshold.
Thus "increase the amount of countervailing evidence needed".

> most fixes in -stable *don't* have a fixes tag. Shouldn't
> your argument be the opposite? If a patch has a fixes tag, it's probably
> not a fix?
I hope it's now clear that this statement confuses P(S|F) with P(F|S).
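
(To put numbers on it: keep the invented 10% base rate and P(F|¬S) = 0.05,
 but suppose only 30% of stable-worthy fixes carried a Fixes: tag, so that
 "most fixes in -stable don't have a fixes tag" were true.  Another quick
 sketch, same caveats as before:)

    # A low P(F|S) does not imply a low P(S|F):
    p_s, p_f_s, p_f_ns = 0.10, 0.30, 0.05
    p_s_given_f = (p_f_s * p_s) / (p_f_s * p_s + p_f_ns * (1 - p_s))
    print(p_s_given_f)  # ~0.40, still four times the 10% base rate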

> Let me put my Microsoft employee hat on here. We have driver/net/hyperv/
> which definitely wasn't getting all the fixes it should have been
> getting without AUTOSEL.
>
> While net/ is doing great, drivers/net/ is not. If it's indeed following
> the same rules then we need to talk about how we get it done right.
>
> I really have no objection to not looking in drivers/net/, it's just
> that the experience I had with the process suggests that it's not
> following the same process as net/.
Again, I'm not saying "don't look in drivers/net/", I'm saying increase
 the probability threshold there: because _some_ of the stable candidates
 have already been picked up by our process, the pickings in what's left
 are thinner, i.e. the base rate P(S) is lower, so you need _more_
 evidence before deciding to autosel something.  (I don't know exactly
 how your NN is set up; is it able to use information like "is in
 drivers/net/" as an input node?)  Part of the trouble is that the NN is
 trained on "did this go to stable eventually", whereas being in
 drivers/net/ is (on this theory) only a signal in the case where it
 didn't go to stable _initially_ and had to be caught later; is that
 information also present in your training data?  The NN would only be
 expected to learn about drivers/net/ for itself if that were the case,
 otherwise it would have no way of knowing about the lowered base rate.
Conversely, if it *did* have that information (was this sent to stable
 by maintainer's own processes, or was it found later to have been
 missed) in the training data, it could learn these things by itself and
 there'd be no need to do anything special for drivers/net/ (or, arguably,
 even for net/).
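
(Purely for illustration, the shape of training example I have in mind is
 something like the Python below; the field names are invented and this is
 not the actual AUTOSEL feature set.)

    from dataclasses import dataclass

    @dataclass
    class TrainingExample:            # hypothetical structure, names made up
        commit: str
        in_drivers_net: bool          # path-based input node
        has_fixes_tag: bool
        keyword_hits: int             # "panic", "oops", "overflow", ...
        # Label records *how* the commit reached stable, not just whether
        # it eventually did:
        route: str                    # "maintainer", "caught-later" or "not-stable"

    examples = [
        TrainingExample("abc123", True,  True,  2, "maintainer"),
        TrainingExample("def456", True,  False, 1, "caught-later"),
        TrainingExample("789abc", False, False, 0, "not-stable"),
    ]

With that finer-grained label the model could learn for itself that, in
 drivers/net/, the "maintainer" route already skims off many of the
 candidates, i.e. that the base rate among what's left is lower.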

> How come? DaveM is specifically asking not to add stable tags because he
> will do the selection himself, right?
Driver maintainers sending patch series to Dave often include in the
 cover letter "please consider patches 4, 7, 8 for stable".  It's *directly*
 CCing stable on patch submissions that Dave asks people not to do.
And from your Microsoft-hat comments it sounds like the HyperV maintainers
 might be under that same misapprehension, if their stuff isn't making it
 to stable as much as it should be.  But I haven't checked.

-ed
