Date:   Thu, 29 Jun 2017 20:49:13 -0400
From:   Josef Bacik <josef@...icpanda.com>
To:     Joel Fernandes <joelaf@...gle.com>
Cc:     Mike Galbraith <umgwanakikbuti@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Juri Lelli <Juri.Lelli@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Brendan Jackman <brendan.jackman@....com>,
        Chris Redpath <Chris.Redpath@....com>
Subject: Re: wake_wide mechanism clarification

On Thu, Jun 29, 2017 at 05:19:14PM -0700, Joel Fernandes wrote:
> Dear Mike,
> 
> I wanted your kind help to understand your patch "sched: beef up
> wake_wide()"[1] which is a modification to the original patch from
> Michael Wang [2].
> 
> In particular, I didn't follow this comment:
> " to shared cache, we look for a minimum 'flip' frequency of llc_size
> in one partner, and a factor of lls_size higher frequency in the
> other."
> 
> Why do we want the master's flip frequency to be higher than the
> slave's by that factor?

(Responding from my personal email as my work email is outlook shit and
impossible to use)

Because we are trying to detect the case where the master is waking many
different processes, while the 'slave' processes are only waking up the
master (or some other specific process), in order to decide whether we care
about cache locality.

> 
> The code here is written as:
> 
> if (slave < factor || master < slave * factor)
>    return 0;
> 
> However I think we should just do (with my current and probably wrong
> understanding):
> 
> if (slave < factor || master < factor)
>     return 0;
> 

Actually I think both are wrong, but I need Mike to weigh in.  In my example
above we'd return 0, because the 'producer' will definitely have a wakee_flip of
ridiculous values, but the 'consumer' could essentially have a wakee_flip of 1,
just the master to tell it that it's done.  I _suppose_ in practice you have a
lock or something so the wakee_flip isn't going to be strictly 1, but some
significantly lower value than the master's.  I'm skeptical of the slave < factor
test; I think it's too high a bar in the case where cache locality doesn't
really matter.  But the master < slave * factor test makes sense, as slave is
going to be orders of magnitude lower than master.

> Basically, I didn't follow why we multiply the slave's flips with
> llc_size. That makes it sound like the master has to have way more
> flips than the slave to return 0 from wake_wide. Could you maybe give
> an example to clarify? Thanks a lot for your help,
> 

It may be worth trying this with schedbench and tracing it to see how it turns
out in practice, as that's the workload that generated all this discussion
before.  I imagine that generally speaking this works out properly.  The small
regression I reported before was at low RPS, so we wouldn't be waking up as
many tasks as often, so we would be returning 0 from wake_wide() and we'd get
screwed.  Dropping the slave < factor part of the test might address that, but
I'd have to trace it to say for sure.  Thanks,

Josef
