Date:	Mon, 06 May 2013 15:29:20 +0530
From:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To:	Alex Shi <alex.shi@...el.com>
CC:	Paul Turner <pjt@...gle.com>,
	Michael Wang <wangyun@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Borislav Petkov <bp@...en8.de>,
	Namhyung Kim <namhyung@...nel.org>,
	Mike Galbraith <efault@....de>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH v5 7/7] sched: consider runnable load average in effective_load

On 05/06/2013 03:05 PM, Alex Shi wrote:
> On 05/06/2013 05:06 PM, Paul Turner wrote:
>> I don't think this is a good idea:
>>
>> The problem with not using the instantaneous weight here is that you
>> potentially penalize the latency of interactive tasks (similarly,
>> potentially important background threads -- e.g. garbage collection).
>>
>> Counter-intuitively we actually want such tasks on the least loaded
>> cpus to minimize their latency.  If the load they contribute ever
>> becomes more substantial we trust that periodic balance will start
>> taking notice of them.
> 
> Sounds reasonable. Many thanks for your input, Paul!
> 
> So, will use the second try. :)
>>
>> [ This is similar to why we have to use the instantaneous weight in
>> calc_cfs_shares. ]
>>
> 
> 

Yes, thank you very much for the inputs, Paul :)

So Alex, Michael, it looks like this is what happened (the four approaches
are lined up in the sketch after the list below).

1. effective_load(), as it stands, uses instantaneous loads to calculate
the CPU shares both before and after a new task is woken up on the given cpu.

2. With my patch, I modified it to use the runnable load average while
calculating the CPU share *after* a new task could be woken up, and
retained the instantaneous load to calculate the CPU share *before* a new
task could be woken up.

3. In Alex's first patch, the runnable load average is used while
calculating the CPU share both before and after a new task could be
woken up on the given CPU.

4. The suggestions that Alex gave:

Suggestion1: Would change the CPU share calculation to use the runnable
load average all the time.

Suggestion2: Did the opposite of point 2 above: it used the runnable load
average while calculating the CPU share *before* a new task is woken up,
while retaining the instantaneous weight to calculate the CPU share
*after* a new task could be woken up.
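
To keep the combinations straight, here is a minimal sketch (plain
illustration only, not kernel code; the enum, struct and array names are
made up) of the variants above as choices of which load source feeds the
before-wakeup and after-wakeup share terms:

enum load_src { INST_WEIGHT, RUNNABLE_AVG };

struct share_calc {
	const char *who;
	enum load_src before;	/* CPU share before the task is woken here */
	enum load_src after;	/* CPU share after the task is woken here  */
};

static const struct share_calc variants[] = {
	{ "1. effective_load() today",	INST_WEIGHT,	INST_WEIGHT  },
	{ "2. my patch",		INST_WEIGHT,	RUNNABLE_AVG },
	{ "3. Alex's first patch",	RUNNABLE_AVG,	RUNNABLE_AVG },
	{ "Suggestion2",		RUNNABLE_AVG,	INST_WEIGHT  },
};

/* Suggestion1 would go further and use RUNNABLE_AVG for every CPU share
 * calculation, not just for these two terms. */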

So, since there was no uniformity in the calculation of CPU shares in
approaches 2 and 3, I think that is what caused the regression. However, I
still don't understand how approach 4, Suggestion2, made the regression go
away although the CPU share calculation there is non-uniform as well.

But as Paul says, we could retain the use of instantaneous loads wherever
CPU shares are calculated, for the reason he mentioned, and leave
effective_load() and calc_cfs_shares() untouched.
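
For reference, here is my from-memory, simplified sketch of where the
instantaneous weight enters calc_cfs_shares() (not a verbatim copy of
kernel/sched/fair.c, so please check it against the actual tree):

static long calc_cfs_shares_sketch(struct cfs_rq *cfs_rq, struct task_group *tg)
{
	long tg_weight, load, shares;

	tg_weight = calc_tg_weight(tg, cfs_rq);	/* total group weight       */
	load = cfs_rq->load.weight;		/* instantaneous cpu weight */

	shares = tg->shares * load;
	if (tg_weight)
		shares /= tg_weight;

	/* clamp to [MIN_SHARES, tg->shares] */
	if (shares < MIN_SHARES)
		shares = MIN_SHARES;
	if (shares > tg->shares)
		shares = tg->shares;

	return shares;
}

Since the share tracks cfs_rq->load.weight directly, a freshly woken task
(whose runnable average is still small) is not under-weighted, which is the
latency argument Paul made above.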

This also brings up another question: should we modify wake_affine() to
pass the runnable load average of the waking task to effective_load()?
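
To make the question concrete, a rough sketch (not a tested patch) of the
spot in wake_affine() this would touch; the this_eff/prev_eff names are
made up, and the se.avg.load_avg_contrib field name is from the per-entity
load-tracking code and worth double-checking:

	tg = task_group(p);
	weight = p->se.load.weight;	/* today: instantaneous weight */

	/*
	 * The question above amounts to doing this instead, so that
	 * effective_load() sees the waking task's tracked runnable load:
	 *
	 *	weight = p->se.avg.load_avg_contrib;
	 */

	this_eff = effective_load(tg, this_cpu, weight, weight);
	prev_eff = effective_load(tg, prev_cpu, 0, weight);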

What do you think?


Thanks

Regards
Preeti U Murthy

