Message-ID: <4CF87C14.8000708@google.com>
Date: Thu, 02 Dec 2010 21:11:48 -0800
From: Paul Turner <pjt@...gle.com>
To: linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...e.hu>, Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linus Torvalds <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4] sched: automated per session task groups
On 11/30/10 22:16, Mike Galbraith wrote:
> On Tue, 2010-11-30 at 19:39 -0800, Paul Turner wrote:
>> On 11/28/10 06:24, Mike Galbraith wrote:
>>>
>>> Something else is seriously wrong though. 36.1 with attached (plus
>>> sched, cgroup: Fixup broken cgroup movement) works a treat, whereas
>>> 37.git and tip with fixlet below both suck rocks. With a make -j40
> running, wakeup-latency is showing latencies of >100ms, amarok skips,
>>> mouse lurches badly.. generally horrid. Something went south.
>>
>> I'm looking at this.
>>
>> The share:share ratios looked good in static testing, but perhaps we
>> need a little more wake-up boost to improve interactivity.
>
> Yeah, feels like a wakeup issue. I too did a (brief) static test, and
> that looked ok.
>
> -Mike
>
Hey Mike,
Does something like the below help?
We're quick to drive the load_contribution up (to avoid over-commit).
However, on sleepy workloads this strands a lot of weight as the thread
migrates between cpus, since the contribution reaches its maximum
instantaneously but decays slowly.
Avoid this by averaging "up" in the wake-up direction as well as on sleep.
We also get a boost from the fact that we use the instantaneous weight
in computing the actual received shares.
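
To make the stranding concrete, here's a toy model -- plain C with
made-up numbers, not the actual update_cfs_load() code.  The average
snaps to the task's full weight on wakeup but is only halved once per
period afterwards, so the old cpu keeps charging stale weight for
several periods after the task has migrated away:

#include <stdio.h>

int main(void)
{
	unsigned long weight = 1024;	/* task's weight while queued here */
	unsigned long avg;
	int p;

	/* wakeup: contribution is driven straight to the maximum */
	avg = weight;
	printf("on wakeup: avg = %lu\n", avg);

	/* task migrates away; this cpu's history only decays geometrically */
	for (p = 1; p <= 4; p++) {
		avg /= 2;
		printf("idle period %d: stranded avg = %lu\n", p, avg);
	}

	return 0;
}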
I actually don't have a desktop setup handy to test "interactivity" (sad
but true -- working on grabbing one).  But it looks better under
synthetic load.
- Paul
===================================================================
--- tip.orig/kernel/sched_fair.c
+++ tip/kernel/sched_fair.c
@@ -743,12 +743,19 @@ static void update_cfs_load(struct cfs_r
 		return;
 
 	now = rq_of(cfs_rq)->clock;
-	delta = now - cfs_rq->load_stamp;
+
+	if (likely(cfs_rq->load_stamp))
+		delta = now - cfs_rq->load_stamp;
+	else {
+		/* avoid large initial delta and initialize load_period */
+		delta = 1;
+		cfs_rq->load_stamp = 1;
+	}
 
 	/* truncate load history at 4 idle periods */
 	if (cfs_rq->load_stamp > cfs_rq->load_last &&
 	    now - cfs_rq->load_last > 4 * period) {
-		cfs_rq->load_period = 0;
+		cfs_rq->load_period = period/2;
 		cfs_rq->load_avg = 0;
 	}
 
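
For what it's worth, a simplified standalone demo of the second hunk --
made-up numbers, and it assumes the contribution is roughly
load_avg / load_period, which glosses over the folding that
update_cfs_load() actually does.  Seeding load_period with half a
window of "empty" history after the idle truncation means the first
sample ramps the average back up instead of snapping it straight to the
task's full weight:

#include <stdio.h>

int main(void)
{
	unsigned long period = 10000000;	/* pretend shares window, in ns */
	unsigned long weight = 1024;
	unsigned long delta = 1000000;		/* first accrual after idle */
	unsigned long load_period, load_avg;

	/* old behaviour: truncate to zero, so the first sample dominates */
	load_period = 0 + delta;
	load_avg = 0 + delta * weight;
	printf("seed 0:        avg = %lu\n", load_avg / load_period);

	/* patched: pretend half a window of idle history survived */
	load_period = period / 2 + delta;
	load_avg = 0 + delta * weight;
	printf("seed period/2: avg = %lu\n", load_avg / load_period);

	return 0;
}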