Date:	Thu, 2 May 2013 15:19:55 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Alex Shi <alex.shi@...el.com>
Cc:	mingo@...hat.com, tglx@...utronix.de, akpm@...ux-foundation.org,
	arjan@...ux.intel.com, bp@...en8.de, pjt@...gle.com,
	namhyung@...nel.org, efault@....de, morten.rasmussen@....com,
	vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
	preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
	linux-kernel@...r.kernel.org, len.brown@...el.com,
	rafael.j.wysocki@...el.com, jkosina@...e.cz,
	clark.williams@...il.com, tony.luck@...el.com,
	keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [PATCH v4 6/6] sched: consider runnable load average in
 effective_load

> @@ -3120,6 +3124,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	struct task_group *tg;
>  	unsigned long weight;
>  	int balanced;
> +	int runnable_avg;
>  
>  	idx	  = sd->wake_idx;
>  	this_cpu  = smp_processor_id();
> @@ -3135,13 +3140,19 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	if (sync) {
>  		tg = task_group(current);
>  		weight = current->se.load.weight;
> +		runnable_avg = current->se.avg.runnable_avg_sum * NICE_0_LOAD
> +				/ (current->se.avg.runnable_avg_period + 1);
>  
> -		this_load += effective_load(tg, this_cpu, -weight, -weight);
> -		load += effective_load(tg, prev_cpu, 0, -weight);
> +		this_load += effective_load(tg, this_cpu, -weight, -weight)
> +				* runnable_avg >> NICE_0_SHIFT;
> +		load += effective_load(tg, prev_cpu, 0, -weight)
> +				* runnable_avg >> NICE_0_SHIFT;
>  	}


I'm fairly sure this is wrong; but I haven't bothered to take pencil to paper.

I think you'll need to fold the runnable avg into the load you pass in, and
make sure effective_load() itself uses the right sums.