Date:   Fri, 2 Feb 2018 20:58:39 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Steven Sistare <steven.sistare@...cle.com>
Cc:     subhra mazumdar <subhra.mazumdar@...cle.com>,
        linux-kernel@...r.kernel.org, mingo@...hat.com,
        dhaval.giani@...cle.com
Subject: Re: [RESEND RFC PATCH V3] sched: Improve scalability of
 select_idle_sibling using SMT balance

On Fri, Feb 02, 2018 at 12:36:47PM -0500, Steven Sistare wrote:
> On 2/2/2018 12:17 PM, Peter Zijlstra wrote:
> > On Fri, Feb 02, 2018 at 11:53:40AM -0500, Steven Sistare wrote:
> >>>> +static int select_idle_smt(struct task_struct *p, struct sched_group *sg)
> >>>>  {
> >>>> +	int i, rand_index, rand_cpu;
> >>>> +	int this_cpu = smp_processor_id();
> >>>>  
> >>>> +	rand_index = CPU_PSEUDO_RANDOM(this_cpu) % sg->group_weight;
> >>>> +	rand_cpu = sg->cp_array[rand_index];
> >>>
> >>> Right, so yuck.. I know why you need that, but that extra array and
> >>> dereference is the reason I never went there.
> >>>
> >>> How much difference does it really make vs the 'normal' wrapping search
> >>> from last CPU ?
> >>>
> >>> This really should be a separate patch with separate performance numbers
> >>> on.
> >>
> >> For the benefit of other readers: if we always search and choose starting from
> >> the first CPU in a core, then later searches will often need to traverse the first
> >> N busy CPUs to find the first idle CPU.  Choosing a random starting point avoids
> >> such bias.  It is probably a win for processors with 4 to 8 CPUs per core, and a
> >> slight but hopefully negligible loss for 2 CPUs per core; I agree we need to see
> >> performance data for this as a separate patch to decide.  We have SPARC systems
> >> with 8 CPUs per core.
> > 
> > Which is why the current code already doesn't start from the first cpu
> > in the mask. We start at whatever CPU the task ran last on, which is
> > effectively 'random' if the system is busy.
> > 
> > So how is a per-cpu rotor better than that?
> 
> The current code is:
>         for_each_cpu(cpu, cpu_smt_mask(target)) {
> 
> For an 8-cpu/core processor, 8 values of target map to the same cpu_smt_mask.
> 8 different tasks will traverse the mask in the same order.

Ooh, the SMT loop.. yes that can be improved. But look at the other
ones, they do:

  for_each_cpu_wrap(cpu, sched_domain_span(sd), target)

so we look for an idle CPU in the LLC domain, and start the iteration at
@target, which will (on average) be different for different CPUs, and
thus hopefully find different idle CPUs.
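
To illustrate the wrapping search: the sketch below is a minimal userspace
analogue of for_each_cpu_wrap() (toy bitmask and a hypothetical
find_idle_wrap() helper, not kernel code) showing how different starting
points pick different idle CPUs out of the same mask:

#include <stdio.h>

/*
 * Toy stand-in for the kernel's for_each_cpu_wrap(): scan the set bits
 * of @mask, but begin at @start and wrap around, so different starting
 * points visit the same CPUs in a different order.  All names here are
 * made up for the example.
 */
static int find_idle_wrap(unsigned long mask, int nbits, int start,
			  const int *idle)
{
	int i;

	for (i = 0; i < nbits; i++) {
		int cpu = (start + i) % nbits;

		if (!(mask & (1UL << cpu)))
			continue;	/* CPU not in the mask */
		if (idle[cpu])
			return cpu;	/* first idle CPU at or after @start */
	}
	return -1;
}

int main(void)
{
	int idle[8] = { 0, 0, 0, 1, 0, 1, 0, 1 };	/* CPUs 3, 5 and 7 idle */

	/* Same mask, different @start => different idle CPU chosen. */
	printf("%d\n", find_idle_wrap(0xff, 8, 0, idle));	/* prints 3 */
	printf("%d\n", find_idle_wrap(0xff, 8, 4, idle));	/* prints 5 */
	printf("%d\n", find_idle_wrap(0xff, 8, 6, idle));	/* prints 7 */
	return 0;
}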

You could simply change the SMT loop to something like:

  for_each_cpu_wrap(cpu, cpu_smt_mask(target), target)

and see what that does.
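
Concretely, the changed function would look roughly like the sketch
below, assuming the select_idle_smt() shape in kernel/sched/fair.c of
that era (the sched_smt_present branch and the cpus_allowed check are
taken from v4.15; this is an illustration, not an actual patch):

static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
{
	int cpu;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	/*
	 * Wrap the scan of the SMT siblings around @target instead of
	 * always starting at the first CPU in the mask.
	 */
	for_each_cpu_wrap(cpu, cpu_smt_mask(target), target) {
		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
			continue;
		if (idle_cpu(cpu))
			return cpu;
	}

	return -1;
}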
