Message-ID: <Pine.LNX.4.64.0708191936110.1817@scrub.home>
Date:	Tue, 21 Aug 2007 00:19:47 +0200 (CEST)
From:	Roman Zippel <zippel@...ux-m68k.org>
To:	Ingo Molnar <mingo@...e.hu>
cc:	Willy Tarreau <w@....eu>, Michael Chang <thenewme91@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andi Kleen <andi@...stfloor.org>,
	Mike Galbraith <efault@....de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: CFS review

Hi,

On Sat, 11 Aug 2007, Ingo Molnar wrote:

> the only relevant thing that comes to mind at the moment is that last 
> week Peter noticed a buggy aspect of sleeper bonuses (in that we do not 
> rate-limit their output, hence we 'waste' them instead of redistributing 
> them), and i've got the small patch below in my queue to fix that - 
> could you give it a try?

It doesn't make much of a difference. OTOH, if I disable the sleeper code 
completely in __update_curr(), I get this (a toy model of the disabled 
path follows the output):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3139 roman     20   0  1796  344  256 R 21.7  0.3   0:02.68 lt
 3138 roman     20   0  1796  344  256 R 21.7  0.3   0:02.68 lt
 3137 roman     20   0  1796  520  432 R 21.7  0.4   0:02.68 lt
 3136 roman     20   0  1532  268  216 R 34.5  0.2   0:06.82 l
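
To be clear about what I disabled, here is a toy userspace model of the
path in question. The names only mimic __update_curr() in
kernel/sched_fair.c, the accounting is simplified, and it is not a
patch:

/* Toy model of the sleeper-bonus path in __update_curr(); a sketch
 * only, not the kernel source. */
#include <stdio.h>

/* #define DISABLE_SLEEPER_CODE 1 */

struct toy_cfs_rq {
	long fair_clock;	/* global fair clock */
	long sleeper_bonus;	/* bonus not yet charged to anyone */
};

struct toy_se {
	long wait_runtime;	/* runtime this task is still owed */
};

static void toy_update_curr(struct toy_cfs_rq *rq, struct toy_se *curr,
			    long delta_exec)
{
	long delta_mine = delta_exec;	/* pretend fair share == delta_exec */

#ifndef DISABLE_SLEEPER_CODE
	/* charge part of the outstanding sleeper bonus to the running
	 * task, so sleepers gain at the runners' expense */
	if (rq->sleeper_bonus > 0) {
		long delta = delta_exec < rq->sleeper_bonus ?
			     delta_exec : rq->sleeper_bonus;

		rq->sleeper_bonus -= delta;
		delta_mine -= delta;
	}
#endif
	rq->fair_clock += delta_exec;
	curr->wait_runtime += delta_mine - delta_exec;
}

int main(void)
{
	struct toy_cfs_rq rq = { .fair_clock = 0, .sleeper_bonus = 250 };
	struct toy_se se = { .wait_runtime = 0 };

	toy_update_curr(&rq, &se, 100);
	printf("fair_clock=%ld wait_runtime=%ld sleeper_bonus=%ld\n",
	       rq.fair_clock, se.wait_runtime, rq.sleeper_bonus);
	return 0;
}

With DISABLE_SLEEPER_CODE defined, the bonus never touches wait_runtime,
which is the effect of the source hack that produced the numbers above.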

Disabling this code completely at runtime via sched_features makes only a 
minor difference (a sketch of that toggle follows the output):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3139 roman     20   0  1796  344  256 R 20.4  0.3   0:09.94 lt
 3138 roman     20   0  1796  344  256 R 20.4  0.3   0:09.94 lt
 3137 roman     20   0  1796  520  432 R 20.4  0.4   0:09.94 lt
 3136 roman     20   0  1532  268  216 R 39.1  0.2   0:19.20 l
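
For reference, the runtime toggle can be done with something like the
sketch below. The bit position is an assumption for illustration only;
the authoritative SCHED_FEAT_* values are in kernel/sched_fair.c:

/* Sketch: clear an assumed fair-sleepers bit in the
 * /proc/sys/kernel/sched_features bitmask. */
#include <stdio.h>

#define FEAT_FAIR_SLEEPERS (1UL << 0)	/* assumed bit position */

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/sched_features", "r+");
	unsigned long feats;

	if (!f) {
		perror("sched_features");
		return 1;
	}
	if (fscanf(f, "%lu", &feats) != 1) {
		fclose(f);
		fprintf(stderr, "unexpected sched_features format\n");
		return 1;
	}
	rewind(f);
	fprintf(f, "%lu\n", feats & ~FEAT_FAIR_SLEEPERS);
	fclose(f);
	printf("sched_features: %lu -> %lu\n",
	       feats, feats & ~FEAT_FAIR_SLEEPERS);
	return 0;
}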

> this is just a blind stab into the dark - i couldn't see any real impact 
> from that patch in various workloads (and it's not upstream yet), so it 
> might not make a big difference.

Can we please skip to the point where you explain the intention a little 
more?
If I had to guess, I'd say this is supposed to keep the runtime balance; 
in that case it would be better to use wait_runtime to adjust fair_clock, 
from where it would be evenly redistributed to all tasks (though this 
would have to be done during enqueue and dequeue). OTOH this would then 
also have consequences for the wait queue, as fair_clock is used to 
calculate fair_key.
IMHO the current wait_runtime should have some influence on the 
calculation of the sleeper bonus, so that wait_runtime doesn't constantly 
overflow for tasks which only run occasionally.
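
To sketch the enqueue/dequeue idea (toy code, not a patch; the names
only mimic CFS, and dividing by nr_running - 1 is my simplification):

/* Toy: fold a task's leftover wait_runtime into fair_clock at dequeue,
 * so the imbalance is redistributed instead of dropped. */
#include <stdio.h>

struct toy_cfs_rq {
	long fair_clock;
	int nr_running;
};

struct toy_se {
	long wait_runtime;
	long fair_key;	/* fair_clock - wait_runtime, orders the queue */
};

static void toy_dequeue(struct toy_cfs_rq *rq, struct toy_se *se)
{
	/* spread the departing task's surplus/deficit over the clock,
	 * so every remaining task sees it through fair_clock */
	if (rq->nr_running > 1)
		rq->fair_clock += se->wait_runtime / (rq->nr_running - 1);
	se->wait_runtime = 0;
	rq->nr_running--;
}

static void toy_enqueue(struct toy_cfs_rq *rq, struct toy_se *se)
{
	rq->nr_running++;
	/* fair_key depends on fair_clock, so the adjustment above
	 * shifts where later arrivals land in the queue */
	se->fair_key = rq->fair_clock - se->wait_runtime;
}

int main(void)
{
	struct toy_cfs_rq rq = { .fair_clock = 1000, .nr_running = 4 };
	struct toy_se sleeper = { .wait_runtime = 300 };

	toy_dequeue(&rq, &sleeper);
	printf("fair_clock after dequeue: %ld\n", rq.fair_clock);

	toy_enqueue(&rq, &sleeper);
	printf("fair_key on re-enqueue:   %ld\n", sleeper.fair_key);
	return 0;
}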

bye, Roman
