Date:   Fri, 30 Jun 2017 19:02:20 +0200
From:   Mike Galbraith <umgwanakikbuti@...il.com>
To:     Josef Bacik <josef@...icpanda.com>,
        Joel Fernandes <joelaf@...gle.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Juri Lelli <Juri.Lelli@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Brendan Jackman <brendan.jackman@....com>,
        Chris Redpath <Chris.Redpath@....com>
Subject: Re: wake_wide mechanism clarification

On Fri, 2017-06-30 at 10:28 -0400, Josef Bacik wrote:
> On Thu, Jun 29, 2017 at 08:04:59PM -0700, Joel Fernandes wrote:
> 
> > That makes sense that we multiply slave's flips by a factor because
> > its low, but I still didn't get why the factor is chosen to be
> > llc_size instead of something else for the multiplication with slave
> > (slave * factor).

> Yeah I don't know why llc_size was chosen...

static void update_top_cache_domain(int cpu)
{
        struct sched_domain_shared *sds = NULL;
        struct sched_domain *sd;
        int id = cpu;
        int size = 1;

        sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
        if (sd) {
                id = cpumask_first(sched_domain_span(sd));
                size = cpumask_weight(sched_domain_span(sd));
                sds = sd->shared;
        }

        rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
        per_cpu(sd_llc_size, cpu) = size;

The goal of wake_wide was to approximate when pulling would be a futile
consolidation effort and counterproductive to scaling.  'course with
ever increasing socket sizes, any 1:N waker is ever more likely to run
out of CPU for its one and only self (slamming into the scaling wall)
before it needs to turn its minions loose to conquer the world.

Something else to consider: network interrupt waking multiple workers
at high frequency.  If the waking CPU is idle, do you really want to
place a worker directly in front of a tattoo artist, or is it better
off nearly anywhere but there?

If the box is virtual with no topology exposed (or real but ancient),
so select_idle_sibling() cannot come to the rescue, two workers can
even get tattooed simultaneously (see sync wakeup).

	-Mike
