Date:	Fri, 22 Feb 2013 14:06:47 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Mike Galbraith <efault@....de>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Michael Wang <wangyun@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Paul Turner <pjt@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>, alex.shi@...el.com,
	Ram Pai <linuxram@...ibm.com>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
	Namhyung Kim <namhyung@...nel.org>
Subject: Re: [RFC PATCH v3 0/3] sched: simplify the select_task_rq_fair()


* Mike Galbraith <efault@....de> wrote:

> > > No, that's too high, you lose too much of the pretty 
> > > face. [...]
> > 
> > Then a logical proportion of it - such as half of it?
> 
> Hm.  Better would maybe be a quick boot time benchmark, and 
> use some multiple of your cross-core pipe ping-pong time?  
> That we know is a complete waste of cycles, because almost all 
> cycles are scheduler cycles with no other work to be done, 
> making firing up another scheduler rather pointless.  If we're 
> approaching that rate, we're approaching a bad idea.

Well, one problem with such dynamic boot-time measurements is 
that they introduce a certain amount of uncertainty that persists 
for the whole lifetime of the booted-up box - and they also suck 
in any sort of non-deterministic execution environment, such as 
virtualized systems.

I think it might be better to measure the scheduling rate all 
the time, and save the _shortest_ cross-cpu-wakeup and 
same-cpu-wakeup latencies (since bootup) as a reference number. 

We might be able to pull this off pretty cheaply as the 
scheduler clock is running all the time and we have all the 
timestamps needed.

Pretty quickly after bootup this 'shortest latency' would settle 
down to a very system specific (and pretty accurate) value.

[ One downside would be an increased sensitivity to the accuracy
  and monotonicity of the scheduler clock - but that's something 
  we want to improve on anyway - and 'worst case' we get too 
  short latencies and we are where we are today. So it can only 
  improve the situation IMO. ]

Would you be interested in trying to hack on an auto-tuning 
feature like this?

Thanks,

	Ingo
