Open Source and information security mailing list archives
 
Date:   Fri, 9 Dec 2016 13:18:03 +0000
From:   Matt Fleming <matt@...eblueprint.co.uk>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     peterz@...radead.org, mingo@...nel.org,
        linux-kernel@...r.kernel.org, Morten.Rasmussen@....com,
        dietmar.eggemann@....com, kernellwp@...il.com,
        yuyang.du@...el.com, umgwanakikbuti@...il.com
Subject: Re: [PATCH 1/2 v3] sched: fix find_idlest_group for fork

On Thu, 08 Dec, at 05:56:53PM, Vincent Guittot wrote:
> During fork, a task's utilization is initialized only once the rq has been
> selected, because the current utilization level of the rq is used to set
> the utilization of the forked task. As the task's utilization is still
> zero at this step of the fork sequence, it doesn't make sense to look for
> spare capacity that can fit the task's utilization.
> Furthermore, I can see perf regressions for the test "hackbench -P -g 1"
> because the least-loaded policy is always bypassed and tasks are not
> spread during fork.
> 
> With this patch and the fix below, we are back to the same performance as
> v4.8. The fix below is only a temporary one, used for the test until a
> smarter solution is found, because we can't simply remove the check, which
> is useful for other benchmarks.
> 
> @@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> 
>  	avg_cost = this_sd->avg_scan_cost;
> 
> -	/*
> -	 * Due to large variance we need a large fuzz factor; hackbench in
> -	 * particularly is sensitive here.
> -	 */
> -	if ((avg_idle / 512) < avg_cost)
> -		return -1;
> -
>  	time = local_clock();
> 
>  	for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> Acked-by: Morten Rasmussen <morten.rasmussen@....com>
> ---
>  kernel/sched/fair.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Tested-by: Matt Fleming <matt@...eblueprint.co.uk>
Reviewed-by: Matt Fleming <matt@...eblueprint.co.uk>
