Message-ID: <1463553703.4012.29.camel@gmail.com>
Date: Wed, 18 May 2016 08:41:43 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Yuyang Du <yuyang.du@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Chris Mason <clm@...com>,
Ingo Molnar <mingo@...nel.org>,
Matt Fleming <matt@...eblueprint.co.uk>,
linux-kernel@...r.kernel.org
Subject: Re: sched: tweak select_idle_sibling to look for idle threads
On Wed, 2016-05-11 at 03:16 +0800, Yuyang Du wrote:
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3027,6 +3027,9 @@ void remove_entity_load_avg(struct sched
> >
> > static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
> > {
> > +	if (sched_feat(LB_TIP_AVG_HIGH) && cfs_rq->load.weight > cfs_rq->runnable_load_avg*2)
> > +		return cfs_rq->runnable_load_avg + min_t(unsigned long, NICE_0_LOAD,
> > +					cfs_rq->load.weight/2);
> > 	return cfs_rq->runnable_load_avg;
> > }
>
> cfs_rq->runnable_load_avg is certainly no greater than (in this case much
> less than, maybe 1/2 of) load.weight, whereas load_avg is not necessarily a
> rock in the gearbox that only impedes speeding up; it also impedes slowing down.
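
For context, the quoted hack only fires when the instantaneous
cfs_rq->load.weight is more than double the decayed runnable_load_avg, and
then it biases the estimate upward by at most NICE_0_LOAD.  A rough
standalone sketch of the arithmetic (illustrative numbers only; NICE_0_LOAD
is taken as 1024 here, the real value depends on kernel config):

/*
 * Standalone sketch (not kernel code) of what the quoted LB_TIP_AVG_HIGH
 * hack returns, using made-up numbers for illustration.
 */
#include <stdio.h>

#define NICE_0_LOAD	1024UL	/* assumed value, config dependent */

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* mirrors the hacked cfs_rq_runnable_load_avg() */
static unsigned long biased_runnable_load(unsigned long weight,
					  unsigned long runnable_load_avg)
{
	if (weight > runnable_load_avg * 2)
		return runnable_load_avg + min_ul(NICE_0_LOAD, weight / 2);
	return runnable_load_avg;
}

int main(void)
{
	/* e.g. one nice-0 task queued, but PELT says it is ~40% busy */
	unsigned long weight = 1024, avg = 400;

	printf("plain avg: %lu, biased: %lu\n",
	       avg, biased_runnable_load(weight, avg));
	return 0;
}

With those numbers the biased value comes out at 912 instead of 400, i.e.
most of the gap back to the instantaneous weight is given back.
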
BTW, the reason the hack helped is that the long (30ms) sleep/run cycle of
the benchmark's default settings causes a large-amplitude sawtooth in the
load numbers (~300 - ~700 range), dinging up load delta resolvability.
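
The ~2x swing follows from the PELT half-life of roughly 32ms: sleeping for
30ms decays the tracked load to a bit more than half of its peak.  A
back-of-envelope simulation (simplified: per-millisecond geometric decay
only, none of the kernel's fixed-point accounting):

/*
 * Sawtooth sketch: a single nice-0 task (weight 1024) running 30ms then
 * sleeping 30ms, averaged with a PELT-style geometric series whose
 * half-life is 32ms.  Only meant to show the amplitude of the swing.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* per-ms decay, y^32 = 0.5 */
	double f = 0.0, lo = 1.0, hi = 0.0;
	int ms;

	for (ms = 0; ms < 10000; ms++) {
		int running = (ms % 60) < 30;	/* 30ms run / 30ms sleep */

		f = f * y + (running ? (1.0 - y) : 0.0);
		if (ms > 1000) {		/* skip the warm-up */
			if (f < lo) lo = f;
			if (f > hi) hi = f;
		}
	}
	printf("load swings between ~%.0f and ~%.0f\n", lo * 1024, hi * 1024);
	return 0;
}

For a weight-1024 task on that cycle this settles into a sawtooth between
roughly 350 and 670, which lines up with the observed range.
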
-Mike