Message-Id: <1193919608.27652.277.camel@twins>
Date:	Thu, 01 Nov 2007 13:20:08 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	vatsa@...ux.vnet.ibm.com
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Mike Galbraith <efault@....de>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>
Subject: Re: [PATCH 2/6] sched: make sched_slice() group scheduling savvy

On Thu, 2007-11-01 at 13:03 +0100, Peter Zijlstra wrote:
> On Thu, 2007-11-01 at 12:58 +0100, Peter Zijlstra wrote:
> 
> > > sched_slice() is about latency; its intended purpose is to ensure each
> > > task is run exactly once during sched_period() - which is
> > > sysctl_sched_latency when nr_running <= sysctl_sched_nr_latency, and
> > > otherwise scales the latency linearly with nr_running.
> 
> The thing that got my brain in a twist is what to do about the non-leaf
> nodes; for those it seems I'm not doing the right thing - I think.

Ok, suppose a tree like so:


level 2                   cfs_rq
                       A           B

level 1             cfs_rqA     cfs_rqB
                     A0        B0 - B99


So for the sake of determinism, we want to impose a period in which all
level 1 tasks will have run (at least) once.

What sched_slice() does is calculate each task's weighted proportion of
that period, so that every task runs exactly once per period.
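
To make that concrete, here is a rough userspace model of how I read
__sched_period()/sched_slice() - plain doubles and made-up millisecond
constants instead of the kernel's u64/do_div arithmetic, so only the
shape of the calculation is right:

/* toy model - not the kernel code */
#include <stdio.h>

#define SCHED_LATENCY_MS	20.0	/* stand-in for sysctl_sched_latency */
#define SCHED_NR_LATENCY	20	/* stand-in for sysctl_sched_nr_latency */

/* the period stretches linearly once more entities are runnable than
 * fit into one latency window */
static double sched_period(unsigned int nr_running)
{
	if (nr_running <= SCHED_NR_LATENCY)
		return SCHED_LATENCY_MS;
	return SCHED_LATENCY_MS * nr_running / SCHED_NR_LATENCY;
}

/* each entity gets a weight-proportional share of that period */
static double sched_slice(double se_weight, double rq_weight,
			  unsigned int nr_running)
{
	return sched_period(nr_running) * se_weight / rq_weight;
}

int main(void)
{
	/* 5 equal nice-0 tasks on one runqueue: 20ms / 5 = 4ms each */
	printf("%.2f ms\n", sched_slice(1024, 5 * 1024, 5));
	return 0;
}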

Now level 2 can introduce large weight differences, which in this case
result in 'lumps' of time.

In the example above the weight difference is 1:100, which is already at
the edge of what regular nice levels can do.
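
Plugging the example above into that sketch (made-up round numbers again:
sysctl_sched_latency 20ms, sysctl_sched_nr_latency 20, nice-0 weight 1024,
and B's group weight simply the sum of its tasks' weights):

  cfs_rqB:  period = 20ms * 100/20 = 100ms, so each of B0..B99 gets
            100ms * 1024/102400 = 1ms.
  level 2:  period = 20ms, so B gets 20ms * 100/101 ~= 19.8ms in one
            lump while A gets the remaining ~0.2ms.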

How about capping the output of sched_slice() at sysctl_sched_latency in
order to break up these large stretches of time?

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -341,7 +341,7 @@ static u64 sched_slice(struct cfs_rq *cf
 		do_div(slice, cfs_rq->load.weight);
 	}
 
-	return slice;
+	return min_t(u64, sysctl_sched_latency, slice);
 }
 
 /*
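
(With the made-up numbers above nothing actually hits the clamp yet - B's
19.8ms is still under the 20ms latency. But as soon as the level-2
runqueue gets busier the period there stretches too: with, say, 30
runnable entities it becomes 30ms, and an entity holding ~99% of the
weight would get a raw slice of ~29.7ms, which the min_t() pulls back to
20ms.)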



