Message-ID: <1274086140.5605.3719.camel@twins>
Date: Mon, 17 May 2010 10:49:00 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: commit e9e9250b: sync wakeup bustage when waker is an RT task
On Mon, 2010-05-17 at 06:38 +0200, Mike Galbraith wrote:
> What would be the harm/consequence of restoring RT tasks to rq->load so
> the wake_affine()::sync logic just worked as before without hackery?
Well, you'd have to constantly adjust the task weight of RT tasks to
reflect their actual consumption. Not really feasible.
So the proportional stuff works like:
slice_i = w_i / (\Sum_j w_j) * dt
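To make that concrete, here's a minimal userspace sketch of that rule
(my names and numbers, not kernel code):

#include <stdio.h>

/* Toy version of the slice rule above: slice_i = w_i / (\Sum_j w_j) * dt.
 * Multiply before dividing to keep integer precision. */
static unsigned long calc_slice(unsigned long w_i, unsigned long sum_w,
                                unsigned long dt)
{
        return w_i * dt / sum_w;
}

int main(void)
{
        /* three nice-0 tasks (weight 1024) and one at double weight */
        unsigned long w[] = { 1024, 1024, 1024, 2048 };
        unsigned long sum = 1024 + 1024 + 1024 + 2048; /* 5120 */
        unsigned long dt = 20000;                      /* period in us */

        for (int i = 0; i < 4; i++)
                printf("task %d: %lu us\n", i, calc_slice(w[i], sum, dt));
        return 0;
}

The nice-0 tasks each get 4000us of the 20000us period, the heavy one
gets 8000us.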
To give an RT task a sensible weight we'd have to reverse that:
w_i = slice_i/dt * (\Sum_j w_j)
which depends on rq->load, so every time rq->load changes you'd have
to recompute the weight of every RT task, which in turn changes
rq->load again (got a headache already? :-)
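You can see the circularity in a toy fixed-point iteration (userspace
C, made-up numbers): the RT weight you want depends on the total
weight, and the total weight includes the RT weight itself:

#include <stdio.h>

int main(void)
{
        double fair_load = 3072.0; /* \Sum w_j of the fair tasks */
        double rt_share  = 0.25;   /* observed slice_i/dt of one RT task */
        double w_rt      = 0.0;    /* the weight we're trying to assign */

        /* w_rt = rt_share * (fair_load + w_rt): every new guess changes
         * rq->load, which invalidates the guess, so iterate. */
        for (int pass = 0; pass < 8; pass++) {
                w_rt = rt_share * (fair_load + w_rt);
                printf("pass %d: w_rt = %6.1f  rq->load = %6.1f\n",
                       pass, w_rt, fair_load + w_rt);
        }
        return 0;
}

This toy case does settle (at 1024 here), but in the kernel you'd have
to redo it for every RT task on every rq->load change.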
> The weight is a more or less random number, but looking around, with
> them excluded, avg_load_per_task is lowered when RT tasks enter the
> system, and rq->load[] misses their weight. (Dunno what effect it has
> on tg shares).
Well, those things are more or less a 'good' thing; it keeps the
computation purely about sched_fair.
So the thing to do I think is to teach wake_affine about cpu_power,
because that is what includes the RT tasks.
The proper comparison of rq weights (like the regular load balancer
already does) is:
A->load / A->cpu_power ~ B->load / B->cpu_power
The lower the cpu_power of a particular cpu, the less processing
capacity it has, and the smaller its share of the total weight should
be in order to provide equal work for each task.
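A sketch of what teaching wake_affine that comparison could look like,
cross-multiplied so there's no division (struct and function names are
mine, not the eventual patch):

#include <stdbool.h>

/* Hypothetical per-cpu summary, not the kernel's struct rq. */
struct rq_info {
        unsigned long load;      /* \Sum of fair task weights */
        unsigned long cpu_power; /* capacity left over after RT etc. */
};

/*
 * Affine wakeup looks OK when
 *   this->load / this->cpu_power <= prev->load / prev->cpu_power
 * rewritten as a cross-multiplication to stay in integer arithmetic.
 */
static bool affine_ok(const struct rq_info *this_rq,
                      const struct rq_info *prev_rq)
{
        return this_rq->load * prev_rq->cpu_power <=
               prev_rq->load * this_rq->cpu_power;
}

A cpu whose cpu_power has been eaten by RT activity then looks
proportionally more loaded, which is exactly the effect rq->load alone
misses.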