Message-ID: <20150727040420.GC3328@fixme-laptop.cn.ibm.com>
Date: Mon, 27 Jul 2015 12:04:20 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Yuyang Du <yuyang.du@...el.com>
Cc: mingo@...nel.org, peterz@...radead.org,
linux-kernel@...r.kernel.org, pjt@...gle.com, bsegall@...gle.com,
morten.rasmussen@....com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, umgwanakikbuti@...il.com,
len.brown@...el.com, rafael.j.wysocki@...el.com,
arjan@...ux.intel.com, fengguang.wu@...el.com
Subject: Re: [PATCH v10 6/7] sched: Provide runnable_load_avg back to cfs_rq
Hi Yuyang,
On Mon, Jul 27, 2015 at 03:56:34AM +0800, Yuyang Du wrote:
> On Mon, Jul 27, 2015 at 11:21:15AM +0800, Boqun Feng wrote:
> > Hi Yuyang,
> >
> > On Mon, Jul 27, 2015 at 02:43:25AM +0800, Yuyang Du wrote:
> > > Hi Boqun,
> > >
> > > On Tue, Jul 21, 2015 at 06:29:56PM +0800, Boqun Feng wrote:
> > > > The point is that you have already tracked the sum of runnable_load_avg
> > > > and blocked_load_avg in cfs_rq->avg.load_avg. If you're going to track
> > > > part of the sum, you'd better track the one that's updated less
> > > > frequently, right?
> > > >
> > > > Anyway, this idea just came to my mind. I wonder which one is updated
> > > > less frequently myself too. ;-) So I'm asking to see whether there is
> > > > something we can improve.
> > >
> > > Actually, this is not the point.
> > >
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > 1) blocked load is more "difficult" to track, hint, migrate.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I may not be getting your point here. Are you saying my patch fails to
handle migration, or are you just telling me that blocked load tracking
needs to take migration into consideration?
If it's the latter, I want to say that, blocked load or not, we have to
handle load_avg on migration anyway, so *adding* some code to handle
blocked load is not a big deal.
Please consider this piece of code in update_cfs_rq_load_avg(), which
decays and updates blocked_load_avg.
@@ -2656,6 +2670,12 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
sa->load_avg = max_t(long, sa->load_avg - r, 0);
sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
+
+ decay_cfs_rq_blocked_load(sa->last_update_time, cfs_rq);
+ cfs_rq->blocked_load_avg = max_t(long,
+ cfs_rq->blocked_load_avg - r, 0);
+ cfs_rq->blocked_load_sum = max_t(s64,
+ cfs_rq->blocked_load_sum - r * LOAD_AVG_MAX, 0);
}
Regards,
Boqun