Message-ID: <1284231470.2251.52.camel@laptop>
Date: Sat, 11 Sep 2010 20:57:50 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tony Lindgren <tony@...mide.com>,
Mike Galbraith <efault@....de>
Subject: Re: [RFC patch 1/2] sched: dynamically adapt granularity with
nr_running
On Sat, 2010-09-11 at 13:37 -0400, Mathieu Desnoyers wrote:
It's not at all clear what exactly you're doing, or why.
What we used to have is:
period -- time in which each task gets scheduled once
This period was adaptive: we had an ideal period
(sysctl_sched_latency), but keeping to it means that each task
gets latency/nr_running time. That is undesirable because busy
systems will over-schedule due to tiny slices. Hence we also had a
minimum slice (sysctl_sched_min_granularity).
This yields:
period := max(sched_latency, nr_running * sched_min_granularity)
[ where we introduce the intermediate:
nr_latency := sched_latency / sched_min_granularity
in order to avoid the multiplication where possible ]
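In code that amounts to roughly the following; a minimal standalone
sketch, not the actual kernel source, with the default sysctl values
assumed:

  #include <stdint.h>
  #include <stdio.h>

  /* Assumed defaults for the knobs discussed above, in nanoseconds. */
  static uint64_t sysctl_sched_latency         = 6000000ULL; /* ideal period */
  static uint64_t sysctl_sched_min_granularity =  750000ULL; /* minimum slice */
  static unsigned long sched_nr_latency        = 8; /* latency / min_granularity */

  /* period := max(sched_latency, nr_running * sched_min_granularity),
   * written with the nr_latency intermediate so the common case
   * avoids the multiplication. */
  static uint64_t sched_period(unsigned long nr_running)
  {
          uint64_t period = sysctl_sched_latency;

          if (nr_running > sched_nr_latency) {
                  period = sysctl_sched_min_granularity;
                  period *= nr_running;
          }

          return period;
  }

  int main(void)
  {
          for (unsigned long nr = 1; nr <= 16; nr *= 2)
                  printf("nr_running=%2lu period=%llu ns\n",
                         nr, (unsigned long long)sched_period(nr));
          return 0;
  }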
Now you introduce a separate preemption measure, sched_gran, as:

                / sched_std_granularity;                        nr_running <= 8
  sched_gran := {
                \ max(sched_min_granularity, sched_latency / nr_running); otherwise
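My reading of that, as a sketch reusing the sysctl values from the
snippet above (the sysctl_sched_std_granularity name and its default
are my assumptions, not the patch's actual code):

  /* Assumed default for the new knob, in nanoseconds. */
  static uint64_t sysctl_sched_std_granularity = 750000ULL;

  /* sched_gran as per the formula above, with the hard-coded 8. */
  static uint64_t sched_gran(unsigned long nr_running)
  {
          uint64_t gran;

          if (nr_running <= 8)
                  return sysctl_sched_std_granularity;

          gran = sysctl_sched_latency / nr_running;
          if (gran < sysctl_sched_min_granularity)
                  gran = sysctl_sched_min_granularity; /* the max() above */

          return gran;
  }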
That definition doesn't make any sense at all, because the result will
either be larger than or as large as the current sched_min_granularity.
And you break the above definition of period by replacing nr_latency by
8.
Not at all charmed; this looks like random changes without conceptual
integrity.