Message-ID: <20070612102622.GA925@in.ibm.com>
Date: Tue, 12 Jun 2007 15:56:22 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: "Dmitry Adamushko" <dmitry.adamushko@...il.com>
Cc: vatsa@...ux.vnet.ibm.com, "Ingo Molnar" <mingo@...e.hu>,
"Nick Piggin" <nickpiggin@...oo.com.au>, efault@....de,
kernel@...ivas.org, containers@...ts.osdl.org,
ckrm-tech@...ts.sourceforge.net, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, pwil3058@...pond.net.au,
tingy@...umass.edu, tong.n.li@...el.com, wli@...omorphy.com,
linux-kernel@...r.kernel.org, balbir@...ibm.com
Subject: Re: [RFC][PATCH 4/6] Fix (bad?) interactions between SCHED_RT and SCHED_NORMAL tasks
On Tue, Jun 12, 2007 at 11:03:36AM +0200, Dmitry Adamushko wrote:
> I had an idea of a per-sched-class 'load balance' calculator, so that
> update_load() (as in your patch) would look something like:
>
> ...
> struct sched_class *class = sched_class_highest;
> unsigned long total = 0;
>
> do {
> 	total += class->update_load(..., now);
> 	class = class->next;
> } while (class);
> ...
>
> and e.g. update_load_fair() would become fair_sched_class::update_load().
>
> This way, all the sched_classes would report the load created by their
> entities (tasks) over the last sampling period. Ideally, the
> calculation should not be based merely on the 'raw_weighted_load' but
> rather done in a way similar to update_load_fair() as in v17.
I like this idea. It neatly segregates load calculation across classes
and effectively replaces the update_load() function I introduced in
Patch #4.
Btw, what will update_load_rt() return?
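
Just so we are talking about the same shape, here is a toy userspace
sketch of how I picture the per-class hook being wired up (all names
and numbers below are made up for illustration and don't come from any
actual patch; update_load_rt() just returns a dummy value, which is
really the open question above):

	#include <stdio.h>

	/* Toy model of the proposed per-sched-class ->update_load() hook. */
	struct sched_class {
		const char *name;
		unsigned long (*update_load)(unsigned long long now);
		struct sched_class *next;
	};

	static unsigned long update_load_rt(unsigned long long now)
	{
		/* e.g. time consumed by RT tasks in the last sampling period */
		return 10;
	}

	static unsigned long update_load_fair(unsigned long long now)
	{
		/* smoothened load of SCHED_NORMAL tasks, as in v17 */
		return 90;
	}

	static struct sched_class fair_sched_class = {
		.name		= "fair",
		.update_load	= update_load_fair,
		.next		= NULL,
	};

	static struct sched_class rt_sched_class = {
		.name		= "rt",
		.update_load	= update_load_rt,
		.next		= &fair_sched_class,
	};

	#define sched_class_highest	(&rt_sched_class)

	int main(void)
	{
		struct sched_class *class = sched_class_highest;
		unsigned long total = 0;
		unsigned long long now = 1000;

		/* walk all classes and sum up their reported loads */
		do {
			total += class->update_load(now);
			class = class->next;
		} while (class);

		printf("total load = %lu\n", total);
		return 0;
	}
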
> > static void entity_tick(struct lrq *lrq, struct sched_entity *curr)
> > {
> > 	struct sched_entity *next;
> > 	struct rq *rq = lrq_rq(lrq);
> > 	u64 now = __rq_clock(rq);
> >
> >+	/* replay load smoothening for all ticks we lost */
> >+	while (time_after_eq64(now, lrq->last_tick)) {
> >+		update_load_fair(lrq);
> >+		lrq->last_tick += TICK_NSEC;
> >+	}
>
> I think it won't work properly this way. The first call returns the
> load for the last TICK_NSEC and all the subsequent ones report zero
> load ('this_load = 0' internally).
mm ..

	exec_delta64 = this_lrq->delta_exec_clock + 1;
	this_lrq->delta_exec_clock = 0;

So exec_delta64 (and fair_delta64) should be at least 1 in successive
calls. How can that lead to this_load = 0?
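
To illustrate what I mean, here is a simplified userspace model of that
delta accounting (toy code, not the actual v17 update_load_fair()):

	#include <stdio.h>

	/*
	 * Toy model of the delta accounting above: each call consumes
	 * whatever exec time accumulated since the previous call and
	 * resets the counter; the "+ 1" keeps the delta non-zero even
	 * for back-to-back calls within the same tick.
	 */
	struct toy_lrq {
		unsigned long long delta_exec_clock;
	};

	static unsigned long long consume_exec_delta(struct toy_lrq *lrq)
	{
		unsigned long long exec_delta64 = lrq->delta_exec_clock + 1;

		lrq->delta_exec_clock = 0;
		return exec_delta64;
	}

	int main(void)
	{
		struct toy_lrq lrq = { .delta_exec_clock = 500000 };

		/* first call sees the accumulated delta, later ones at least 1 */
		printf("%llu\n", consume_exec_delta(&lrq));	/* 500001 */
		printf("%llu\n", consume_exec_delta(&lrq));	/* 1 */
		printf("%llu\n", consume_exec_delta(&lrq));	/* 1 */
		return 0;
	}
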
The idea behind 'replay lost ticks' is to avoid load smoothening of
-every- lrq on -every- tick. Let's say there are ten lrqs
(corresponding to ten different users). We load-smoothen only the
currently active lrq (whose task is currently running). The other
lrqs' loads get smoothened as soon as they become active next time
(thus catching up with all the lost ticks).
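
In other words, something along these lines (again a toy userspace
model with made-up names and a crude smoothening function, just to show
the catch-up loop):

	#include <stdio.h>

	#define TICK_NSEC	1000000ULL	/* 1ms tick, for illustration */

	/*
	 * Toy model of the "replay lost ticks" idea: an lrq that was
	 * inactive for a while catches up on all the smoothening steps it
	 * missed the moment it becomes active again, instead of being
	 * touched on every tick.
	 */
	struct toy_lrq {
		unsigned long long last_tick;	/* last tick accounted for */
		unsigned long cpu_load;		/* smoothened load */
		unsigned long raw_load;		/* load in the current period */
	};

	static void toy_update_load_fair(struct toy_lrq *lrq)
	{
		/* crude exponential smoothening stand-in */
		lrq->cpu_load = (lrq->cpu_load + lrq->raw_load) / 2;
	}

	static void toy_entity_tick(struct toy_lrq *lrq, unsigned long long now)
	{
		/* replay load smoothening for all the ticks this lrq missed */
		while (now >= lrq->last_tick) {
			toy_update_load_fair(lrq);
			lrq->last_tick += TICK_NSEC;
		}
	}

	int main(void)
	{
		struct toy_lrq lrq = {
			.last_tick	= 0,
			.cpu_load	= 1024,
			.raw_load	= 0,
		};

		/* lrq was idle for ~10 ticks; one call catches up on all of them */
		toy_entity_tick(&lrq, 10 * TICK_NSEC);
		printf("cpu_load after catch-up: %lu\n", lrq.cpu_load);
		return 0;
	}
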
--
Regards,
vatsa