Date:	Mon, 13 Sep 2010 20:23:55 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Tony Lindgren <tony@...mide.com>,
	Mike Galbraith <efault@....de>
Subject: [PATCH] sched: Improve latencies under load by decreasing minimum
 scheduling granularity


* Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:

> * Ingo Molnar (mingo@...e.hu) wrote:
> > 
> > * Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> > 
> > > * Linus Torvalds (torvalds@...ux-foundation.org) wrote:
> > > > On Mon, Sep 13, 2010 at 9:16 AM, Mathieu Desnoyers
> > > > <mathieu.desnoyers@...icios.com> wrote:
> > > > >
> > > > > OK, the long IRC discussions we just had convinced me that the current scheme
> > > > > takes things into account by adapting the granularity dynamically, but also got
> > > > > me to notice that check_preempt seems to compare vruntime with wall time, which
> > > > > is utterly incorrect. So maybe all my patch was doing was to expose this bug:
> > > > 
> > > > Do you have latency numbers for this patch?
> > > 
> > > Sure, see below,
> > > 
> > > In addition to this patch, [...]
> > 
> > Note, which is a NOP for your latency workload.
> > 
> > > [...] I also used Peter's approach of reducing the minimum granularity
> > 
> > Ok, that's the very first patch i sent yesterday morning - so we also 
> > have my numbers that it reduces latencies.
> > 
> > To move things along i'll apply it with your Reported-by and Acked-by 
> > line, ok?
> > 
> > We can also work on the other, more complex things after that, but first 
> > let's make some progress on the latency front ...
> 
> Yep, that's fine with me.
> 
> Thanks!

You are welcome!

Linus, Mathieu, you can test the granularity reduction patch via:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched/urgent

Patch also attached below.

Note, i'd like to keep this separate from the check_preempt() change - 
which only affects reniced tasks and isn't essential to these tests. (we 
want such things to be in separate commits, for bisectability)

 Thanks,

	Ingo

------------------>
Ingo Molnar (1):
      sched: Improve latencies under load by decreasing minimum scheduling granularity


 kernel/sched_fair.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 9b5b4f8..a171138 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -54,13 +54,13 @@ enum sched_tunable_scaling sysctl_sched_tunable_scaling
  * Minimal preemption granularity for CPU-bound tasks:
  * (default: 2 msec * (1 + ilog(ncpus)), units: nanoseconds)
  */
-unsigned int sysctl_sched_min_granularity = 2000000ULL;
-unsigned int normalized_sysctl_sched_min_granularity = 2000000ULL;
+unsigned int sysctl_sched_min_granularity = 750000ULL;
+unsigned int normalized_sysctl_sched_min_granularity = 750000ULL;
 
 /*
  * is kept at sysctl_sched_latency / sysctl_sched_min_granularity
  */
-static unsigned int sched_nr_latency = 3;
+static unsigned int sched_nr_latency = 8;
 
 /*
  * After fork, child runs first. If set to 0 (default) then

