Date:   Tue, 6 Feb 2018 16:30:03 -0800
From:   Subhra Mazumdar <subhra.mazumdar@...cle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Steven Sistare <steven.sistare@...cle.com>,
        linux-kernel@...r.kernel.org, mingo@...hat.com,
        dhaval.giani@...cle.com
Subject: Re: [RESEND RFC PATCH V3] sched: Improve scalability of
 select_idle_sibling using SMT balance



On 02/06/2018 01:12 AM, Peter Zijlstra wrote:
> On Mon, Feb 05, 2018 at 02:09:11PM -0800, Subhra Mazumdar wrote:
>> The pseudo random is also used for choosing a random core to compare with,
>> how will transposing achieve that?
> Not entirely sure what your point is. Current code doesn't compare to
> just _one_ other core, and I don't think we'd ever want to do that.
>
> So currently select_idle_core() will, if there is an idle core, iterate
> the whole thing trying to find it. If it fails, it clears the
> 'have_idle_core' state.
>
> select_idle_cpu, which we'll fall back to, will limit the scanning based
> on the average idle time.
>
>
> The crucial point however, is that concurrent wakeups will not, on
> average, do the same iteration because of the target offset.
I meant the SMT balance patch. That compares against only one other
random core and makes the decision in O(1). Any potential scan of all
cores or CPUs is O(n), doesn't scale, and will only get worse in the
future. That applies to both select_idle_core() and select_idle_cpu().
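
To make the complexity point concrete, here is a toy userspace sketch
of the idea (not the patch code; NR_CORES and core_busy_siblings[] are
made-up stand-ins for the per-core SMT utilization the patch tracks):

#include <stdlib.h>

#define NR_CORES 64

/* Toy model: number of busy SMT siblings per core. */
static int core_busy_siblings[NR_CORES];

/*
 * O(1): compare the current core against one randomly chosen core and
 * pick whichever has fewer busy siblings.  The cost stays constant no
 * matter how many cores the machine has.
 */
static int pick_core_smt_balance(int this_core)
{
        int other = rand() % NR_CORES;

        if (other != this_core &&
            core_busy_siblings[other] < core_busy_siblings[this_core])
                return other;

        return this_core;
}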

Is there any reason this randomized approach is not acceptable even if
benchmarks show improvement? Are there other benchmarks I should try?

Also, your suggestion to keep the SMT utilization but still do a
traversal of cores in select_idle_core() while remembering the least
loaded core will still have the problem of potentially traversing all
cores. I can compare this with core-level-only SMT balancing; would
that be useful for deciding? I will also test on SPARC machines with
a higher degree of SMT.
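
Using the same toy model as the sketch above (again only an
illustration, not the real select_idle_core()), that traversal would
be roughly:

/*
 * O(n): visit every core and remember the least loaded one.  In the
 * worst case this still touches all cores on every wakeup.
 */
static int pick_core_full_scan(int this_core)
{
        int i, best = this_core;

        for (i = 0; i < NR_CORES; i++)
                if (core_busy_siblings[i] < core_busy_siblings[best])
                        best = i;

        return best;
}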

You had also mentioned doing this only for SMT > 2; I'm not sure I
understand why, since even for SMT=2 (Intel) the benchmarks show
improvement. This clearly shows the scalability problem.

Thanks,
Subhra
