Message-ID: <20161203232503.GJ20785@codeblueprint.co.uk>
Date:   Sat, 3 Dec 2016 23:25:03 +0000
From:   Matt Fleming <matt@...eblueprint.co.uk>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     peterz@...radead.org, mingo@...nel.org,
        linux-kernel@...r.kernel.org, Morten.Rasmussen@....com,
        dietmar.eggemann@....com, kernellwp@...il.com, yuyang.du@...el.com,
        umgwanakikbuti@...il.com
Subject: Re: [PATCH 1/2 v2] sched: fix find_idlest_group for fork

On Fri, 25 Nov, at 04:34:32PM, Vincent Guittot wrote:
> During fork, the utilization of a task is initialized only once the rq has
> been selected, because the current utilization level of the rq is used to
> set the utilization of the forked task. As the task's utilization is still
> null at this step of the fork sequence, it doesn't make sense to look for
> some spare capacity that can fit the task's utilization.
> Furthermore, I see perf regressions for the test "hackbench -P -g 1"
> because the least-loaded policy is always bypassed and tasks are not
> spread during fork.
> 
> With this patch and the fix below, we are back to the same performance as
> v4.8. The fix below is only a temporary one, used for the test until a
> smarter solution is found, because we can't simply remove the check, which
> is useful for other benchmarks.
> 
> @@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>  
>  	avg_cost = this_sd->avg_scan_cost;
>  
> -	/*
> -	 * Due to large variance we need a large fuzz factor; hackbench in
> -	 * particularly is sensitive here.
> -	 */
> -	if ((avg_idle / 512) < avg_cost)
> -		return -1;
> -
>  	time = local_clock();
>  
>  	for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {
> 
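
(For scale on that check: IIRC avg_idle is capped at 2 * sysctl_sched_migration_cost,
i.e. ~1ms with the default, so avg_idle/512 is at most roughly 2us and any
avg_scan_cost above that disables the scan outright; under a saturating load
like hackbench avg_idle is usually far smaller still, so the scan gets skipped
almost every time. Rough, illustrative numbers only, just to show why that
hunk matters for fork-heavy runs.)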

OK, I need to point out that I didn't apply the above hunk when
testing this patch series. But I wouldn't have expected that to impact
our fork-intensive workloads so much. Let me know if you'd like me to
re-run with it applied.

I don't see much of a difference, positive or negative, for the
majority of the test machines; it's mainly a wash.

However, the following 4-cpu Xeon E5504 machine does show a nice win with
thread counts in the mid-range (note: the second column is the number of
hackbench groups, where each group has 40 tasks):

hackbench-process-pipes
                        4.9.0-rc6             4.9.0-rc6             4.9.0-rc6
                        tip-sched      fix-fig-for-fork               fix-sig
Amean    1       0.2193 (  0.00%)      0.2014 (  8.14%)      0.1746 ( 20.39%)
Amean    3       0.4489 (  0.00%)      0.3544 ( 21.04%)      0.3284 ( 26.83%)
Amean    5       0.6173 (  0.00%)      0.4690 ( 24.02%)      0.4977 ( 19.37%)
Amean    7       0.7323 (  0.00%)      0.6367 ( 13.05%)      0.6267 ( 14.42%)
Amean    12      0.9716 (  0.00%)      1.0187 ( -4.85%)      0.9351 (  3.75%)
Amean    16      1.2866 (  0.00%)      1.2664 (  1.57%)      1.2131 (  5.71%)
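
As an aside, the null-utilization point from the changelog is easy to see in
a tiny userspace model. The sketch below is not the kernel code (the group
names, the numbers and the simplified selection rule are all made up for
illustration); it just shows that with task_util() == 0, any group with
non-zero spare capacity passes a find_idlest_group()-style fit check, so the
least-loaded fallback underneath it effectively never runs at fork time:

/* toy_fig.c: illustrative model only, not kernel code */
#include <stdio.h>

struct group { const char *name; long spare; long load; };

static const char *pick(struct group *g, int n, long task_util)
{
	struct group *most_spare = &g[0], *least_load = &g[0];
	int i;

	for (i = 1; i < n; i++) {
		if (g[i].spare > most_spare->spare)
			most_spare = &g[i];
		if (g[i].load < least_load->load)
			least_load = &g[i];
	}

	/*
	 * Spare-capacity path: only meant to win when the best group can
	 * "fit" the task.  With task_util == 0 this guard is vacuous, so
	 * the least-loaded fallback below is never reached.
	 */
	if (most_spare->spare > task_util / 2)
		return most_spare->name;

	return least_load->name;
}

int main(void)
{
	struct group groups[] = {
		/* high load but lots of apparent spare (e.g. stuffed with
		 * freshly forked, still-zero-util tasks) */
		{ .name = "groupA", .spare = 300, .load = 900 },
		/* lightly loaded group */
		{ .name = "groupB", .spare = 100, .load = 200 },
	};

	/* A freshly forked task still has util == 0: spare path decides. */
	printf("fork   (util=0)   -> %s\n", pick(groups, 2, 0));
	/* A task with real utilization can fall through to least-loaded. */
	printf("wakeup (util=800) -> %s\n", pick(groups, 2, 800));

	return 0;
}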
